Test Report: KVM_Linux_crio 21934

0ee4f00f81c855d6dbc5c3cb2cb1b494940d38dc:2025-11-22:42437

Tests failed (12/345)

TestAddons/parallel/Ingress (159.73s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-266876 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-266876 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-266876 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [9e3ed8c7-5788-4d41-aba1-71043fc65fb1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [9e3ed8c7-5788-4d41-aba1-71043fc65fb1] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003799602s
I1121 23:49:27.749455  250664 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-266876 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.490316003s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-266876 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.50
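Note: the failure above is the curl step at addons_test.go:264; the in-VM curl exited with status 28, which is curl's "operation timed out" code, so the nginx ingress never answered on 127.0.0.1 within the test's window. A minimal manual re-check of that step is sketched below, assuming the addons-266876 profile from this run is still available; the service/pod names and the Host header come from the test itself, and this sketch is illustrative, not part of the recorded log.

# Check the ingress controller and the test's nginx pod/service, then repeat the in-VM curl.
# curl exit code 28 means the request timed out rather than being refused.
kubectl --context addons-266876 -n ingress-nginx get pods -o wide
kubectl --context addons-266876 -n default get svc nginx
kubectl --context addons-266876 -n default get pods -l run=nginx
out/minikube-linux-amd64 -p addons-266876 ssh \
  "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"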
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-266876 -n addons-266876
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-266876 logs -n 25: (1.345055079s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-263491 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-263491 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ delete  │ -p download-only-263491                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-263491 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ delete  │ -p download-only-246895                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-246895 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ delete  │ -p download-only-263491                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-263491 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ start   │ --download-only -p binary-mirror-996598 --alsologtostderr --binary-mirror http://127.0.0.1:41123 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-996598 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ delete  │ -p binary-mirror-996598                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-996598 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ addons  │ enable dashboard -p addons-266876                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ addons  │ disable dashboard -p addons-266876                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ start   │ -p addons-266876 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:48 UTC │
	│ addons  │ addons-266876 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │ 21 Nov 25 23:48 UTC │
	│ addons  │ addons-266876 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ enable headlamp -p addons-266876 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-266876                                                                                                                                                                                                                                                                                                                                                                                         │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ ssh     │ addons-266876 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │                     │
	│ addons  │ addons-266876 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ ip      │ addons-266876 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ ip      │ addons-266876 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │ 21 Nov 25 23:51 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:46:48
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:46:48.131095  251263 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:46:48.131340  251263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:48.131350  251263 out.go:374] Setting ErrFile to fd 2...
	I1121 23:46:48.131354  251263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:48.131528  251263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1121 23:46:48.132085  251263 out.go:368] Setting JSON to false
	I1121 23:46:48.132905  251263 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":26936,"bootTime":1763741872,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 23:46:48.132973  251263 start.go:143] virtualization: kvm guest
	I1121 23:46:48.134971  251263 out.go:179] * [addons-266876] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 23:46:48.136184  251263 notify.go:221] Checking for updates...
	I1121 23:46:48.136230  251263 out.go:179]   - MINIKUBE_LOCATION=21934
	I1121 23:46:48.137505  251263 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:46:48.138918  251263 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1121 23:46:48.140232  251263 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	I1121 23:46:48.141364  251263 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 23:46:48.142744  251263 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 23:46:48.144346  251263 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:46:48.178112  251263 out.go:179] * Using the kvm2 driver based on user configuration
	I1121 23:46:48.179144  251263 start.go:309] selected driver: kvm2
	I1121 23:46:48.179156  251263 start.go:930] validating driver "kvm2" against <nil>
	I1121 23:46:48.179168  251263 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 23:46:48.179919  251263 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 23:46:48.180166  251263 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 23:46:48.180191  251263 cni.go:84] Creating CNI manager for ""
	I1121 23:46:48.180267  251263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1121 23:46:48.180276  251263 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1121 23:46:48.180323  251263 start.go:353] cluster config:
	{Name:addons-266876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1121 23:46:48.180438  251263 iso.go:125] acquiring lock: {Name:mkc83d3435f1eaa5a92358fc78f85b7d74048deb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 23:46:48.181860  251263 out.go:179] * Starting "addons-266876" primary control-plane node in "addons-266876" cluster
	I1121 23:46:48.182929  251263 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:46:48.182959  251263 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 23:46:48.182976  251263 cache.go:65] Caching tarball of preloaded images
	I1121 23:46:48.183059  251263 preload.go:238] Found /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 23:46:48.183069  251263 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 23:46:48.183354  251263 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/config.json ...
	I1121 23:46:48.183376  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/config.json: {Name:mk0295453cd01463fa22b5d6c7388981c204c24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:48.183507  251263 start.go:360] acquireMachinesLock for addons-266876: {Name:mk0193f6f5636a08cc7939c91649c5870e7698fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1121 23:46:48.183552  251263 start.go:364] duration metric: took 33.297µs to acquireMachinesLock for "addons-266876"
	I1121 23:46:48.183570  251263 start.go:93] Provisioning new machine with config: &{Name:addons-266876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 23:46:48.183614  251263 start.go:125] createHost starting for "" (driver="kvm2")
	I1121 23:46:48.185254  251263 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1121 23:46:48.185412  251263 start.go:159] libmachine.API.Create for "addons-266876" (driver="kvm2")
	I1121 23:46:48.185441  251263 client.go:173] LocalClient.Create starting
	I1121 23:46:48.185543  251263 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem
	I1121 23:46:48.249364  251263 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem
	I1121 23:46:48.566610  251263 main.go:143] libmachine: creating domain...
	I1121 23:46:48.566636  251263 main.go:143] libmachine: creating network...
	I1121 23:46:48.568191  251263 main.go:143] libmachine: found existing default network
	I1121 23:46:48.568404  251263 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1121 23:46:48.568892  251263 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e90440}
	I1121 23:46:48.569009  251263 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-266876</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1121 23:46:48.575044  251263 main.go:143] libmachine: creating private network mk-addons-266876 192.168.39.0/24...
	I1121 23:46:48.645727  251263 main.go:143] libmachine: private network mk-addons-266876 192.168.39.0/24 created
	I1121 23:46:48.646042  251263 main.go:143] libmachine: <network>
	  <name>mk-addons-266876</name>
	  <uuid>c503bc44-d3ea-47cf-b120-da4593d18380</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:80:0f:c2'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1121 23:46:48.646078  251263 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876 ...
	I1121 23:46:48.646103  251263 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21934-244751/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1121 23:46:48.646114  251263 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21934-244751/.minikube
	I1121 23:46:48.646192  251263 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21934-244751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21934-244751/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1121 23:46:48.924945  251263 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa...
	I1121 23:46:48.947251  251263 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/addons-266876.rawdisk...
	I1121 23:46:48.947299  251263 main.go:143] libmachine: Writing magic tar header
	I1121 23:46:48.947321  251263 main.go:143] libmachine: Writing SSH key tar header
	I1121 23:46:48.947404  251263 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876 ...
	I1121 23:46:48.947463  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876
	I1121 23:46:48.947488  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876 (perms=drwx------)
	I1121 23:46:48.947500  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751/.minikube/machines
	I1121 23:46:48.947510  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751/.minikube/machines (perms=drwxr-xr-x)
	I1121 23:46:48.947521  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751/.minikube
	I1121 23:46:48.947528  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751/.minikube (perms=drwxr-xr-x)
	I1121 23:46:48.947540  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751
	I1121 23:46:48.947549  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751 (perms=drwxrwxr-x)
	I1121 23:46:48.947562  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1121 23:46:48.947572  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1121 23:46:48.947579  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1121 23:46:48.947589  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1121 23:46:48.947600  251263 main.go:143] libmachine: checking permissions on dir: /home
	I1121 23:46:48.947606  251263 main.go:143] libmachine: skipping /home - not owner
	I1121 23:46:48.947613  251263 main.go:143] libmachine: defining domain...
	I1121 23:46:48.949155  251263 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-266876</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/addons-266876.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-266876'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1121 23:46:48.954504  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:cb:01:39 in network default
	I1121 23:46:48.955203  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:48.955226  251263 main.go:143] libmachine: starting domain...
	I1121 23:46:48.955230  251263 main.go:143] libmachine: ensuring networks are active...
	I1121 23:46:48.956075  251263 main.go:143] libmachine: Ensuring network default is active
	I1121 23:46:48.956468  251263 main.go:143] libmachine: Ensuring network mk-addons-266876 is active
	I1121 23:46:48.957054  251263 main.go:143] libmachine: getting domain XML...
	I1121 23:46:48.958124  251263 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-266876</name>
	  <uuid>c4a95d5c-2715-4bec-8bc2-a50909bf4217</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/addons-266876.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:ab:5a:31'/>
	      <source network='mk-addons-266876'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:cb:01:39'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1121 23:46:50.230732  251263 main.go:143] libmachine: waiting for domain to start...
	I1121 23:46:50.232398  251263 main.go:143] libmachine: domain is now running
	I1121 23:46:50.232423  251263 main.go:143] libmachine: waiting for IP...
	I1121 23:46:50.233366  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:50.234245  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:50.234266  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:50.234594  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:50.234654  251263 retry.go:31] will retry after 291.794239ms: waiting for domain to come up
	I1121 23:46:50.528283  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:50.528971  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:50.528987  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:50.529342  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:50.529380  251263 retry.go:31] will retry after 351.305248ms: waiting for domain to come up
	I1121 23:46:50.882166  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:50.883099  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:50.883122  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:50.883485  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:50.883531  251263 retry.go:31] will retry after 364.129033ms: waiting for domain to come up
	I1121 23:46:51.249389  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:51.250192  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:51.250210  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:51.250511  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:51.250562  251263 retry.go:31] will retry after 385.747401ms: waiting for domain to come up
	I1121 23:46:51.638320  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:51.639301  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:51.639319  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:51.639704  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:51.639759  251263 retry.go:31] will retry after 745.315642ms: waiting for domain to come up
	I1121 23:46:52.386579  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:52.387430  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:52.387444  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:52.387845  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:52.387891  251263 retry.go:31] will retry after 692.465755ms: waiting for domain to come up
	I1121 23:46:53.081995  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:53.082882  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:53.082899  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:53.083254  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:53.083289  251263 retry.go:31] will retry after 879.261574ms: waiting for domain to come up
	I1121 23:46:53.964041  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:53.964752  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:53.964779  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:53.965086  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:53.965141  251263 retry.go:31] will retry after 1.461085566s: waiting for domain to come up
	I1121 23:46:55.428870  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:55.429589  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:55.429605  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:55.429939  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:55.429981  251263 retry.go:31] will retry after 1.78072773s: waiting for domain to come up
	I1121 23:46:57.213143  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:57.213941  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:57.213961  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:57.214320  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:57.214355  251263 retry.go:31] will retry after 1.504173315s: waiting for domain to come up
	I1121 23:46:58.719849  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:58.720746  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:58.720770  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:58.721137  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:58.721173  251263 retry.go:31] will retry after 2.875642747s: waiting for domain to come up
	I1121 23:47:01.600296  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:01.600945  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:47:01.600961  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:47:01.601274  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:47:01.601321  251263 retry.go:31] will retry after 3.623260763s: waiting for domain to come up
	I1121 23:47:05.227711  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.228458  251263 main.go:143] libmachine: domain addons-266876 has current primary IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.228475  251263 main.go:143] libmachine: found domain IP: 192.168.39.50
	I1121 23:47:05.228486  251263 main.go:143] libmachine: reserving static IP address...
	I1121 23:47:05.229043  251263 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-266876", mac: "52:54:00:ab:5a:31", ip: "192.168.39.50"} in network mk-addons-266876
	I1121 23:47:05.530130  251263 main.go:143] libmachine: reserved static IP address 192.168.39.50 for domain addons-266876
	I1121 23:47:05.530160  251263 main.go:143] libmachine: waiting for SSH...
	I1121 23:47:05.530169  251263 main.go:143] libmachine: Getting to WaitForSSH function...
	I1121 23:47:05.533988  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.534529  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:05.534565  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.534795  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:05.535088  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:05.535104  251263 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1121 23:47:05.657550  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 23:47:05.657963  251263 main.go:143] libmachine: domain creation complete
	I1121 23:47:05.659772  251263 machine.go:94] provisionDockerMachine start ...
	I1121 23:47:05.662740  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.663237  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:05.663263  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.663525  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:05.663805  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:05.663820  251263 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 23:47:05.773778  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1121 23:47:05.773809  251263 buildroot.go:166] provisioning hostname "addons-266876"
	I1121 23:47:05.777397  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.777855  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:05.777881  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.778090  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:05.778347  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:05.778362  251263 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-266876 && echo "addons-266876" | sudo tee /etc/hostname
	I1121 23:47:05.904549  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-266876
	
	I1121 23:47:05.907947  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.908399  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:05.908428  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.908637  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:05.908909  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:05.908934  251263 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-266876' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-266876/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-266876' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 23:47:06.027505  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 23:47:06.027542  251263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21934-244751/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-244751/.minikube}
	I1121 23:47:06.027606  251263 buildroot.go:174] setting up certificates
	I1121 23:47:06.027620  251263 provision.go:84] configureAuth start
	I1121 23:47:06.030823  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.031234  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.031255  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.033405  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.033742  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.033761  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.033873  251263 provision.go:143] copyHostCerts
	I1121 23:47:06.033958  251263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/key.pem (1679 bytes)
	I1121 23:47:06.034087  251263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/ca.pem (1078 bytes)
	I1121 23:47:06.034147  251263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/cert.pem (1123 bytes)
	I1121 23:47:06.034206  251263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem org=jenkins.addons-266876 san=[127.0.0.1 192.168.39.50 addons-266876 localhost minikube]
	I1121 23:47:06.088178  251263 provision.go:177] copyRemoteCerts
	I1121 23:47:06.088255  251263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 23:47:06.090836  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.091229  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.091259  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.091419  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:06.177697  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1121 23:47:06.208945  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 23:47:06.240002  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 23:47:06.271424  251263 provision.go:87] duration metric: took 243.786645ms to configureAuth
	I1121 23:47:06.271463  251263 buildroot.go:189] setting minikube options for container-runtime
	I1121 23:47:06.271718  251263 config.go:182] Loaded profile config "addons-266876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:47:06.275170  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.275691  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.275730  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.276021  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:06.276275  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:06.276292  251263 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 23:47:06.522993  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 23:47:06.523024  251263 machine.go:97] duration metric: took 863.230308ms to provisionDockerMachine
	I1121 23:47:06.523034  251263 client.go:176] duration metric: took 18.337586387s to LocalClient.Create
	I1121 23:47:06.523056  251263 start.go:167] duration metric: took 18.337642424s to libmachine.API.Create "addons-266876"
	I1121 23:47:06.523067  251263 start.go:293] postStartSetup for "addons-266876" (driver="kvm2")
	I1121 23:47:06.523080  251263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 23:47:06.523174  251263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 23:47:06.526182  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.526662  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.526701  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.526857  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:06.616570  251263 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 23:47:06.622182  251263 info.go:137] Remote host: Buildroot 2025.02
	I1121 23:47:06.622217  251263 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-244751/.minikube/addons for local assets ...
	I1121 23:47:06.622288  251263 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-244751/.minikube/files for local assets ...
	I1121 23:47:06.622311  251263 start.go:296] duration metric: took 99.238343ms for postStartSetup
	I1121 23:47:06.625431  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.626043  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.626079  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.626664  251263 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/config.json ...
	I1121 23:47:06.626937  251263 start.go:128] duration metric: took 18.44331085s to createHost
	I1121 23:47:06.629842  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.630374  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.630404  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.630671  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:06.630883  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:06.630893  251263 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1121 23:47:06.742838  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763768826.701122136
	
	I1121 23:47:06.742869  251263 fix.go:216] guest clock: 1763768826.701122136
	I1121 23:47:06.742878  251263 fix.go:229] Guest: 2025-11-21 23:47:06.701122136 +0000 UTC Remote: 2025-11-21 23:47:06.626948375 +0000 UTC m=+18.545515405 (delta=74.173761ms)
	I1121 23:47:06.742897  251263 fix.go:200] guest clock delta is within tolerance: 74.173761ms
	I1121 23:47:06.742902  251263 start.go:83] releasing machines lock for "addons-266876", held for 18.559341059s
	I1121 23:47:06.745883  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.746295  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.746321  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.746833  251263 ssh_runner.go:195] Run: cat /version.json
	I1121 23:47:06.746947  251263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 23:47:06.750243  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.750247  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.750776  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.750809  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.750823  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.750856  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.751031  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:06.751199  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:06.830906  251263 ssh_runner.go:195] Run: systemctl --version
	I1121 23:47:06.862977  251263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 23:47:07.024839  251263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 23:47:07.032647  251263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 23:47:07.032771  251263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 23:47:07.054527  251263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 23:47:07.054564  251263 start.go:496] detecting cgroup driver to use...
	I1121 23:47:07.054645  251263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 23:47:07.075688  251263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 23:47:07.094661  251263 docker.go:218] disabling cri-docker service (if available) ...
	I1121 23:47:07.094747  251263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 23:47:07.112602  251263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 23:47:07.129177  251263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 23:47:07.274890  251263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 23:47:07.492757  251263 docker.go:234] disabling docker service ...
	I1121 23:47:07.492831  251263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 23:47:07.510021  251263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 23:47:07.525620  251263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 23:47:07.675935  251263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 23:47:07.820400  251263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 23:47:07.837622  251263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 23:47:07.861864  251263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 23:47:07.861942  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.875198  251263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 23:47:07.875282  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.889198  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.902595  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.915879  251263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 23:47:07.929954  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.943664  251263 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.965719  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.978868  251263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 23:47:07.991074  251263 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1121 23:47:07.991144  251263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1121 23:47:08.015804  251263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 23:47:08.029594  251263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:47:08.172544  251263 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 23:47:08.286465  251263 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 23:47:08.286546  251263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 23:47:08.292422  251263 start.go:564] Will wait 60s for crictl version
	I1121 23:47:08.292523  251263 ssh_runner.go:195] Run: which crictl
	I1121 23:47:08.297252  251263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1121 23:47:08.333825  251263 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1121 23:47:08.333924  251263 ssh_runner.go:195] Run: crio --version
	I1121 23:47:08.364777  251263 ssh_runner.go:195] Run: crio --version
	I1121 23:47:08.397593  251263 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1121 23:47:08.401817  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:08.402315  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:08.402343  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:08.402614  251263 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1121 23:47:08.408058  251263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 23:47:08.427560  251263 kubeadm.go:884] updating cluster {Name:addons-266876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 23:47:08.427708  251263 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:47:08.427752  251263 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 23:47:08.466046  251263 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1121 23:47:08.466131  251263 ssh_runner.go:195] Run: which lz4
	I1121 23:47:08.471268  251263 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1121 23:47:08.476699  251263 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1121 23:47:08.476733  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1121 23:47:10.046904  251263 crio.go:462] duration metric: took 1.575665951s to copy over tarball
	I1121 23:47:10.046997  251263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1121 23:47:11.663077  251263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.616046572s)
	I1121 23:47:11.663118  251263 crio.go:469] duration metric: took 1.616181048s to extract the tarball
	I1121 23:47:11.663129  251263 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1121 23:47:11.705893  251263 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 23:47:11.746467  251263 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 23:47:11.746493  251263 cache_images.go:86] Images are preloaded, skipping loading
	I1121 23:47:11.746502  251263 kubeadm.go:935] updating node { 192.168.39.50 8443 v1.34.1 crio true true} ...
	I1121 23:47:11.746609  251263 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-266876 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 23:47:11.746698  251263 ssh_runner.go:195] Run: crio config
	I1121 23:47:11.795708  251263 cni.go:84] Creating CNI manager for ""
	I1121 23:47:11.795739  251263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1121 23:47:11.795759  251263 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 23:47:11.795781  251263 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-266876 NodeName:addons-266876 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 23:47:11.795901  251263 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-266876"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.50"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 23:47:11.795977  251263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 23:47:11.808516  251263 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 23:47:11.808581  251263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 23:47:11.820622  251263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1121 23:47:11.842831  251263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 23:47:11.864556  251263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1121 23:47:11.887018  251263 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I1121 23:47:11.891743  251263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 23:47:11.907140  251263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:47:12.050500  251263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 23:47:12.084445  251263 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876 for IP: 192.168.39.50
	I1121 23:47:12.084477  251263 certs.go:195] generating shared ca certs ...
	I1121 23:47:12.084503  251263 certs.go:227] acquiring lock for ca certs: {Name:mk43fa762c6315605485300e07cf83f8f357f8dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.084733  251263 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key
	I1121 23:47:12.219080  251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt ...
	I1121 23:47:12.219114  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt: {Name:mk4ab860b5f00eeacc7d5a064e6b8682b8350cc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.219328  251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key ...
	I1121 23:47:12.219350  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key: {Name:mkd33a6a072a0fb7cb39783adfcb9f792da25f35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.219466  251263 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key
	I1121 23:47:12.275894  251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.crt ...
	I1121 23:47:12.275930  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.crt: {Name:mk4874a4ae2a76e1a44a3b81a6402bcd1f4b9663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.276126  251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key ...
	I1121 23:47:12.276145  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key: {Name:mk1d8c1db5a8f9f2ab09a6bc1211706c413d6bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.276291  251263 certs.go:257] generating profile certs ...
	I1121 23:47:12.276376  251263 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.key
	I1121 23:47:12.276402  251263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt with IP's: []
	I1121 23:47:12.405508  251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt ...
	I1121 23:47:12.405541  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: {Name:mkcc0d2bdbfeba71ea1f4e63e41e1151d9d382ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.405791  251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.key ...
	I1121 23:47:12.405812  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.key: {Name:mk1d82213fc29dcec5419cdd18c321f7613a56e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.405953  251263 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key.8d7367ca
	I1121 23:47:12.405982  251263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt.8d7367ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.50]
	I1121 23:47:12.443135  251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt.8d7367ca ...
	I1121 23:47:12.443162  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt.8d7367ca: {Name:mk318161f2384c8556874dd6e6e5fc8eee5c9cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.443363  251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key.8d7367ca ...
	I1121 23:47:12.443385  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key.8d7367ca: {Name:mke2fa439b03069f58550af68f202fe26e9c97ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.443489  251263 certs.go:382] copying /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt.8d7367ca -> /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt
	I1121 23:47:12.443595  251263 certs.go:386] copying /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key.8d7367ca -> /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key
	I1121 23:47:12.443670  251263 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.key
	I1121 23:47:12.443705  251263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.crt with IP's: []
	I1121 23:47:12.603488  251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.crt ...
	I1121 23:47:12.603520  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.crt: {Name:mk795b280bcd9c59cf78ec03ece9d4b0753eaaa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.603755  251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.key ...
	I1121 23:47:12.603779  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.key: {Name:mkfe4eecc4523b56c0d41272318c6e77ecb4dd52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.604032  251263 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 23:47:12.604112  251263 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem (1078 bytes)
	I1121 23:47:12.604152  251263 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem (1123 bytes)
	I1121 23:47:12.604194  251263 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem (1679 bytes)
	I1121 23:47:12.604861  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 23:47:12.637531  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 23:47:12.669272  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 23:47:12.700033  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 23:47:12.730398  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 23:47:12.766760  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 23:47:12.814595  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 23:47:12.848615  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 23:47:12.879920  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 23:47:12.912022  251263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 23:47:12.933857  251263 ssh_runner.go:195] Run: openssl version
	I1121 23:47:12.940506  251263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 23:47:12.953948  251263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:47:12.959503  251263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:47:12.959560  251263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:47:12.967627  251263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 23:47:12.981398  251263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 23:47:12.986879  251263 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 23:47:12.986957  251263 kubeadm.go:401] StartCluster: {Name:addons-266876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:47:12.987064  251263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:47:12.987158  251263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:47:13.025633  251263 cri.go:89] found id: ""
	I1121 23:47:13.025741  251263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 23:47:13.038755  251263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 23:47:13.052370  251263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 23:47:13.065036  251263 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 23:47:13.065062  251263 kubeadm.go:158] found existing configuration files:
	
	I1121 23:47:13.065139  251263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 23:47:13.077032  251263 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 23:47:13.077097  251263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 23:47:13.090073  251263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 23:47:13.101398  251263 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 23:47:13.101465  251263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 23:47:13.114396  251263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 23:47:13.126235  251263 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 23:47:13.126304  251263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 23:47:13.139694  251263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 23:47:13.151819  251263 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 23:47:13.151882  251263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 23:47:13.164512  251263 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1121 23:47:13.226756  251263 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 23:47:13.226832  251263 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 23:47:13.345339  251263 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 23:47:13.345491  251263 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 23:47:13.345647  251263 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 23:47:13.359341  251263 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 23:47:13.436841  251263 out.go:252]   - Generating certificates and keys ...
	I1121 23:47:13.437031  251263 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 23:47:13.437171  251263 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 23:47:13.558105  251263 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 23:47:13.651102  251263 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 23:47:13.902476  251263 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 23:47:14.134826  251263 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 23:47:14.345459  251263 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 23:47:14.345645  251263 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-266876 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I1121 23:47:14.583497  251263 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 23:47:14.583717  251263 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-266876 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I1121 23:47:14.931062  251263 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 23:47:15.434495  251263 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 23:47:15.838983  251263 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 23:47:15.839096  251263 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 23:47:15.963541  251263 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 23:47:16.269311  251263 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 23:47:16.929016  251263 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 23:47:17.056928  251263 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 23:47:17.384976  251263 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 23:47:17.385309  251263 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 23:47:17.387510  251263 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 23:47:17.389626  251263 out.go:252]   - Booting up control plane ...
	I1121 23:47:17.389730  251263 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 23:47:17.389802  251263 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 23:47:17.389859  251263 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 23:47:17.408245  251263 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 23:47:17.408393  251263 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 23:47:17.416098  251263 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 23:47:17.416463  251263 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 23:47:17.416528  251263 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 23:47:17.572061  251263 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 23:47:17.572273  251263 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 23:47:18.575810  251263 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.003449114s
	I1121 23:47:18.581453  251263 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 23:47:18.581592  251263 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.50:8443/livez
	I1121 23:47:18.581745  251263 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 23:47:18.581872  251263 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 23:47:21.444953  251263 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.865438426s
	I1121 23:47:22.473854  251263 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.895647364s
	I1121 23:47:24.581213  251263 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003558147s
	I1121 23:47:24.600634  251263 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 23:47:24.621062  251263 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 23:47:24.638002  251263 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 23:47:24.638263  251263 kubeadm.go:319] [mark-control-plane] Marking the node addons-266876 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 23:47:24.652039  251263 kubeadm.go:319] [bootstrap-token] Using token: grn95n.s74ahx9w73uu3ca1
	I1121 23:47:24.653732  251263 out.go:252]   - Configuring RBAC rules ...
	I1121 23:47:24.653880  251263 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 23:47:24.659155  251263 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 23:47:24.672314  251263 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 23:47:24.680496  251263 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 23:47:24.684483  251263 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 23:47:24.688905  251263 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 23:47:24.990519  251263 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 23:47:25.446692  251263 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 23:47:25.987142  251263 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 23:47:25.988495  251263 kubeadm.go:319] 
	I1121 23:47:25.988586  251263 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 23:47:25.988628  251263 kubeadm.go:319] 
	I1121 23:47:25.988755  251263 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 23:47:25.988774  251263 kubeadm.go:319] 
	I1121 23:47:25.988799  251263 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 23:47:25.988879  251263 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 23:47:25.988970  251263 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 23:47:25.988990  251263 kubeadm.go:319] 
	I1121 23:47:25.989051  251263 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 23:47:25.989061  251263 kubeadm.go:319] 
	I1121 23:47:25.989146  251263 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 23:47:25.989158  251263 kubeadm.go:319] 
	I1121 23:47:25.989248  251263 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 23:47:25.989366  251263 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 23:47:25.989475  251263 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 23:47:25.989488  251263 kubeadm.go:319] 
	I1121 23:47:25.989602  251263 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 23:47:25.989728  251263 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 23:47:25.989738  251263 kubeadm.go:319] 
	I1121 23:47:25.989856  251263 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token grn95n.s74ahx9w73uu3ca1 \
	I1121 23:47:25.990007  251263 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7035eabdb6dc9c299f99d6120e0649f8a13de0412ab5d63e88dba6debc1b302c \
	I1121 23:47:25.990049  251263 kubeadm.go:319] 	--control-plane 
	I1121 23:47:25.990057  251263 kubeadm.go:319] 
	I1121 23:47:25.990176  251263 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 23:47:25.990186  251263 kubeadm.go:319] 
	I1121 23:47:25.990300  251263 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token grn95n.s74ahx9w73uu3ca1 \
	I1121 23:47:25.990438  251263 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7035eabdb6dc9c299f99d6120e0649f8a13de0412ab5d63e88dba6debc1b302c 
	I1121 23:47:25.992560  251263 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 23:47:25.992602  251263 cni.go:84] Creating CNI manager for ""
	I1121 23:47:25.992623  251263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1121 23:47:25.994543  251263 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1121 23:47:25.996106  251263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1121 23:47:26.010555  251263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1121 23:47:26.033834  251263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 23:47:26.033972  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:26.033980  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-266876 minikube.k8s.io/updated_at=2025_11_21T23_47_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=addons-266876 minikube.k8s.io/primary=true
	I1121 23:47:26.084057  251263 ops.go:34] apiserver oom_adj: -16
	I1121 23:47:26.203325  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:26.704291  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:27.204057  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:27.704402  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:28.204383  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:28.704103  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:29.204400  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:29.704060  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:30.204340  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:30.314187  251263 kubeadm.go:1114] duration metric: took 4.280316282s to wait for elevateKubeSystemPrivileges
	I1121 23:47:30.314239  251263 kubeadm.go:403] duration metric: took 17.327291456s to StartCluster
	I1121 23:47:30.314270  251263 settings.go:142] acquiring lock: {Name:mkd124ec98418d6d2386a8f1a0e2e5ff6f0f99d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:30.314449  251263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1121 23:47:30.314952  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/kubeconfig: {Name:mkbde37dbfe874aace118914fefd91b607e3afff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:30.315195  251263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 23:47:30.315224  251263 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 23:47:30.315300  251263 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1121 23:47:30.315425  251263 addons.go:70] Setting yakd=true in profile "addons-266876"
	I1121 23:47:30.315450  251263 addons.go:239] Setting addon yakd=true in "addons-266876"
	I1121 23:47:30.315462  251263 addons.go:70] Setting inspektor-gadget=true in profile "addons-266876"
	I1121 23:47:30.315485  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.315491  251263 addons.go:239] Setting addon inspektor-gadget=true in "addons-266876"
	I1121 23:47:30.315501  251263 addons.go:70] Setting default-storageclass=true in profile "addons-266876"
	I1121 23:47:30.315529  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.315528  251263 config.go:182] Loaded profile config "addons-266876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:47:30.315544  251263 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-266876"
	I1121 23:47:30.315569  251263 addons.go:70] Setting cloud-spanner=true in profile "addons-266876"
	I1121 23:47:30.315601  251263 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-266876"
	I1121 23:47:30.315604  251263 addons.go:70] Setting registry-creds=true in profile "addons-266876"
	I1121 23:47:30.315608  251263 addons.go:239] Setting addon cloud-spanner=true in "addons-266876"
	I1121 23:47:30.315620  251263 addons.go:239] Setting addon registry-creds=true in "addons-266876"
	I1121 23:47:30.315642  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.315644  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.315644  251263 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-266876"
	I1121 23:47:30.315691  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.315903  251263 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-266876"
	I1121 23:47:30.315921  251263 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-266876"
	I1121 23:47:30.315947  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.316235  251263 addons.go:70] Setting ingress=true in profile "addons-266876"
	I1121 23:47:30.316274  251263 addons.go:239] Setting addon ingress=true in "addons-266876"
	I1121 23:47:30.316310  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.316663  251263 addons.go:70] Setting registry=true in profile "addons-266876"
	I1121 23:47:30.316697  251263 addons.go:239] Setting addon registry=true in "addons-266876"
	I1121 23:47:30.316723  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.317068  251263 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-266876"
	I1121 23:47:30.317089  251263 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-266876"
	I1121 23:47:30.317115  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.317160  251263 addons.go:70] Setting gcp-auth=true in profile "addons-266876"
	I1121 23:47:30.315588  251263 addons.go:70] Setting ingress-dns=true in profile "addons-266876"
	I1121 23:47:30.317206  251263 mustload.go:66] Loading cluster: addons-266876
	I1121 23:47:30.317231  251263 addons.go:239] Setting addon ingress-dns=true in "addons-266876"
	I1121 23:47:30.317253  251263 addons.go:70] Setting metrics-server=true in profile "addons-266876"
	I1121 23:47:30.317268  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.317272  251263 addons.go:239] Setting addon metrics-server=true in "addons-266876"
	I1121 23:47:30.317299  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.317400  251263 config.go:182] Loaded profile config "addons-266876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:47:30.317441  251263 addons.go:70] Setting storage-provisioner=true in profile "addons-266876"
	I1121 23:47:30.317460  251263 addons.go:239] Setting addon storage-provisioner=true in "addons-266876"
	I1121 23:47:30.317490  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.317944  251263 addons.go:70] Setting volcano=true in profile "addons-266876"
	I1121 23:47:30.317973  251263 addons.go:239] Setting addon volcano=true in "addons-266876"
	I1121 23:47:30.318000  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.318181  251263 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-266876"
	I1121 23:47:30.318207  251263 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-266876"
	I1121 23:47:30.318457  251263 addons.go:70] Setting volumesnapshots=true in profile "addons-266876"
	I1121 23:47:30.318489  251263 addons.go:239] Setting addon volumesnapshots=true in "addons-266876"
	I1121 23:47:30.318514  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.318636  251263 out.go:179] * Verifying Kubernetes components...
	I1121 23:47:30.321872  251263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:47:30.323979  251263 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1121 23:47:30.324015  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1121 23:47:30.324059  251263 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1121 23:47:30.324308  251263 addons.go:239] Setting addon default-storageclass=true in "addons-266876"
	I1121 23:47:30.324852  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.325430  251263 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1121 23:47:30.325460  251263 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1121 23:47:30.325834  251263 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1121 23:47:30.325536  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.326179  251263 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:47:30.326187  251263 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1121 23:47:30.326317  251263 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1121 23:47:30.326336  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1121 23:47:30.326936  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1121 23:47:30.326998  251263 out.go:179]   - Using image docker.io/registry:3.0.0
	I1121 23:47:30.326980  251263 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1121 23:47:30.327044  251263 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1121 23:47:30.327543  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	W1121 23:47:30.327112  251263 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1121 23:47:30.327823  251263 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1121 23:47:30.327894  251263 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 23:47:30.328316  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1121 23:47:30.327908  251263 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1121 23:47:30.327937  251263 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1121 23:47:30.328129  251263 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-266876"
	I1121 23:47:30.328994  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.328605  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1121 23:47:30.328665  251263 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 23:47:30.328694  251263 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 23:47:30.330248  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1121 23:47:30.329173  251263 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 23:47:30.330310  251263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 23:47:30.330603  251263 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1121 23:47:30.330604  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1121 23:47:30.331083  251263 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1121 23:47:30.330604  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1121 23:47:30.330630  251263 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1121 23:47:30.331264  251263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1121 23:47:30.330646  251263 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1121 23:47:30.330654  251263 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 23:47:30.331990  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1121 23:47:30.330703  251263 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 23:47:30.332116  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1121 23:47:30.331545  251263 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 23:47:30.332194  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 23:47:30.332542  251263 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1121 23:47:30.332882  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1121 23:47:30.334102  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1121 23:47:30.334436  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.335240  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.335327  251263 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:47:30.335355  251263 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1121 23:47:30.336111  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.336119  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.336147  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.336581  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1121 23:47:30.336829  251263 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 23:47:30.336847  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1121 23:47:30.336857  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.336898  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.336963  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.337875  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.337944  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.337986  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.338791  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.338889  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.339032  251263 out.go:179]   - Using image docker.io/busybox:stable
	I1121 23:47:30.339781  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1121 23:47:30.340483  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.340514  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.340666  251263 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 23:47:30.340695  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1121 23:47:30.340797  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.341117  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.341357  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.342122  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.342189  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.342220  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.342778  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.342795  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.342811  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.342975  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.343022  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.343206  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1121 23:47:30.343363  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.343504  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.343566  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.343596  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.344162  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.344636  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.344648  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.344718  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.344930  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.344977  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.345068  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.345337  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.345379  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.345381  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.345342  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.345569  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1121 23:47:30.345654  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.346248  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.346289  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.346396  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.346427  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.346508  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.346706  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.346995  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1121 23:47:30.347011  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1121 23:47:30.347328  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.347842  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.347873  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.348042  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.348168  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.348658  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.348696  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.348924  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.349955  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.350423  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.350455  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.350644  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	W1121 23:47:30.571554  251263 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53184->192.168.39.50:22: read: connection reset by peer
	I1121 23:47:30.571604  251263 retry.go:31] will retry after 237.893493ms: ssh: handshake failed: read tcp 192.168.39.1:53184->192.168.39.50:22: read: connection reset by peer
	W1121 23:47:30.594670  251263 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53214->192.168.39.50:22: read: connection reset by peer
	I1121 23:47:30.594718  251263 retry.go:31] will retry after 219.796697ms: ssh: handshake failed: read tcp 192.168.39.1:53214->192.168.39.50:22: read: connection reset by peer
	W1121 23:47:30.648821  251263 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53232->192.168.39.50:22: read: connection reset by peer
	I1121 23:47:30.648855  251263 retry.go:31] will retry after 280.923937ms: ssh: handshake failed: read tcp 192.168.39.1:53232->192.168.39.50:22: read: connection reset by peer
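The three handshake failures above appear transient: a dozen-plus ssh clients were just opened in parallel for the addon installers, a few of the dials are reset by the guest, and each is retried after a short randomized delay (retry.go). A rough sketch of that retry-with-jitter idea (illustrative names, not minikube's actual retry implementation):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // dialWithRetry calls fn up to attempts times, sleeping a jittered delay
    // between failures, mirroring the "will retry after ..." lines above.
    func dialWithRetry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        _ = dialWithRetry(3, 200*time.Millisecond, func() error {
            return errors.New("ssh: handshake failed")
        })
    }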
	I1121 23:47:30.906273  251263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 23:47:30.906343  251263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 23:47:31.303471  251263 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1121 23:47:31.303497  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 23:47:31.303519  251263 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1121 23:47:31.329075  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1121 23:47:31.372362  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 23:47:31.401245  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 23:47:31.443583  251263 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1121 23:47:31.443617  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1121 23:47:31.448834  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 23:47:31.496006  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 23:47:31.498539  251263 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1121 23:47:31.498563  251263 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1121 23:47:31.569835  251263 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1121 23:47:31.569869  251263 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1121 23:47:31.572494  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 23:47:31.624422  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1121 23:47:31.627643  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 23:47:31.900562  251263 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1121 23:47:31.900602  251263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1121 23:47:32.010439  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 23:47:32.024813  251263 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1121 23:47:32.024876  251263 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1121 23:47:32.170850  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1121 23:47:32.170888  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1121 23:47:32.219733  251263 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1121 23:47:32.219791  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1121 23:47:32.404951  251263 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1121 23:47:32.404996  251263 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1121 23:47:32.544216  251263 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1121 23:47:32.544253  251263 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1121 23:47:32.578250  251263 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 23:47:32.578284  251263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1121 23:47:32.653254  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1121 23:47:32.653285  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1121 23:47:32.741481  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1121 23:47:32.794874  251263 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1121 23:47:32.794909  251263 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1121 23:47:32.881148  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 23:47:33.067639  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1121 23:47:33.067700  251263 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1121 23:47:33.067715  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1121 23:47:33.067738  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1121 23:47:33.271805  251263 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1121 23:47:33.271834  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1121 23:47:33.312325  251263 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:47:33.312356  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1121 23:47:33.436072  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1121 23:47:33.436107  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1121 23:47:33.708500  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:47:33.708927  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1121 23:47:34.040431  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1121 23:47:34.040474  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1121 23:47:34.408465  251263 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.502153253s)
	I1121 23:47:34.408519  251263 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.502134143s)
	I1121 23:47:34.408554  251263 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1121 23:47:34.408578  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.105046996s)
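The completed sed pipeline above is what injects the host record reported just before it: it adds a "log" line ahead of "errors" and a "hosts" block ahead of the resolv.conf forwarder in the coredns ConfigMap, so host.minikube.internal resolves to the libvirt gateway 192.168.39.1 from inside the cluster. After the replace, the Corefile reads roughly (stock kubeadm plugins elided):

    .:53 {
        log
        errors
        # ... stock kubeadm CoreDNS plugins ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        # ...
    }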
	I1121 23:47:34.409219  251263 node_ready.go:35] waiting up to 6m0s for node "addons-266876" to be "Ready" ...
	I1121 23:47:34.415213  251263 node_ready.go:49] node "addons-266876" is "Ready"
	I1121 23:47:34.415248  251263 node_ready.go:38] duration metric: took 6.005684ms for node "addons-266876" to be "Ready" ...
	I1121 23:47:34.415268  251263 api_server.go:52] waiting for apiserver process to appear ...
	I1121 23:47:34.415324  251263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 23:47:34.664082  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1121 23:47:34.664113  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1121 23:47:34.918427  251263 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-266876" context rescaled to 1 replicas
	I1121 23:47:35.149255  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1121 23:47:35.149293  251263 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1121 23:47:35.732395  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1121 23:47:35.732425  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1121 23:47:36.406188  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1121 23:47:36.406216  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1121 23:47:36.897571  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 23:47:36.897608  251263 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1121 23:47:37.313754  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 23:47:37.790744  251263 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1121 23:47:37.793928  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:37.794570  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:37.794603  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:37.794806  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:38.530200  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.201079248s)
	I1121 23:47:38.530311  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.15790373s)
	I1121 23:47:38.530349  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.129067228s)
	I1121 23:47:38.530410  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.081551551s)
	I1121 23:47:38.530485  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.034438414s)
	I1121 23:47:38.530531  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.958009964s)
	I1121 23:47:38.530576  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.90611639s)
	I1121 23:47:38.530688  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.902998512s)
	W1121 23:47:38.596091  251263 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
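The "Operation cannot be fulfilled ... the object has been modified" warning above is the API server's optimistic-concurrency check: the addon read the local-path StorageClass, something else updated it in the meantime, and the write with the stale resourceVersion was rejected. The conventional remedy is to re-read and retry the mutation on conflict; a minimal client-go sketch of that pattern (hypothetical package and function names, not minikube's actual code):

    package storageclass

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // markDefault marks the named StorageClass as the cluster default,
    // re-reading and retrying whenever the update hits a resourceVersion
    // conflict like the one logged above.
    func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err
        })
    }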
	I1121 23:47:38.696471  251263 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1121 23:47:39.049239  251263 addons.go:239] Setting addon gcp-auth=true in "addons-266876"
	I1121 23:47:39.049319  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:39.051589  251263 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1121 23:47:39.054431  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:39.054905  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:39.054946  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:39.055124  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:40.911949  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.901459816s)
	I1121 23:47:40.912003  251263 addons.go:495] Verifying addon ingress=true in "addons-266876"
	I1121 23:47:40.912027  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.170505015s)
	I1121 23:47:40.912060  251263 addons.go:495] Verifying addon registry=true in "addons-266876"
	I1121 23:47:40.912106  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.030918863s)
	I1121 23:47:40.912208  251263 addons.go:495] Verifying addon metrics-server=true in "addons-266876"
	I1121 23:47:40.913759  251263 out.go:179] * Verifying ingress addon...
	I1121 23:47:40.913769  251263 out.go:179] * Verifying registry addon...
	I1121 23:47:40.916006  251263 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1121 23:47:40.916028  251263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1121 23:47:41.040220  251263 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1121 23:47:41.040250  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:41.043403  251263 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 23:47:41.043428  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:41.261875  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.5533177s)
	W1121 23:47:41.261945  251263 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1121 23:47:41.261983  251263 retry.go:31] will retry after 128.365697ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
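Both the failure and the queued retry above are an ordering race rather than a bad manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation as the CRDs that define its kind, and those CRDs are not yet established when the custom resource is mapped, hence "no matches for kind VolumeSnapshotClass". The retry normally succeeds once the CRDs register; the race can also be avoided explicitly by applying the CRDs first and waiting for them, for example:

    kubectl apply \
      -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml

(file names as in the addon directory used above; "established" is the standard readiness condition for a CRD).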
	I1121 23:47:41.262010  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.553035838s)
	I1121 23:47:41.262077  251263 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.846726255s)
	I1121 23:47:41.262115  251263 api_server.go:72] duration metric: took 10.946861397s to wait for apiserver process to appear ...
	I1121 23:47:41.262194  251263 api_server.go:88] waiting for apiserver healthz status ...
	I1121 23:47:41.262220  251263 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I1121 23:47:41.263907  251263 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-266876 service yakd-dashboard -n yakd-dashboard
	
	I1121 23:47:41.282742  251263 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I1121 23:47:41.287497  251263 api_server.go:141] control plane version: v1.34.1
	I1121 23:47:41.287535  251263 api_server.go:131] duration metric: took 25.332513ms to wait for apiserver health ...
	I1121 23:47:41.287548  251263 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 23:47:41.306603  251263 system_pods.go:59] 16 kube-system pods found
	I1121 23:47:41.306658  251263 system_pods.go:61] "amd-gpu-device-plugin-pd4sx" [88fffae7-a3c2-46ef-a382-867c1f45dd2f] Running
	I1121 23:47:41.306672  251263 system_pods.go:61] "coredns-66bc5c9577-kmf4p" [c2dae9ee-3a8e-4c5c-9880-256aae9475c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:41.306696  251263 system_pods.go:61] "coredns-66bc5c9577-tgk67" [ad56ae13-a7c4-44e3-a817-73aa300110b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:41.306706  251263 system_pods.go:61] "etcd-addons-266876" [6329b2aa-df0d-4707-8094-52ed6a9b70fa] Running
	I1121 23:47:41.306714  251263 system_pods.go:61] "kube-apiserver-addons-266876" [d6500ed5-1e8a-40e7-8761-ce5d9b817580] Running
	I1121 23:47:41.306720  251263 system_pods.go:61] "kube-controller-manager-addons-266876" [9afdbaac-04de-4ae4-a1a5-ab74382c1ee4] Running
	I1121 23:47:41.306728  251263 system_pods.go:61] "kube-ingress-dns-minikube" [4c8445af-f050-4525-a580-c6cb45567d21] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:47:41.306737  251263 system_pods.go:61] "kube-proxy-d6jsf" [8c9f1dbf-19b7-4f19-8f33-11b6886f1237] Running
	I1121 23:47:41.306742  251263 system_pods.go:61] "kube-scheduler-addons-266876" [c367cb26-5fd3-4d41-8752-d8b7d6fe6c13] Running
	I1121 23:47:41.306749  251263 system_pods.go:61] "metrics-server-85b7d694d7-tcd7p" [8b9a51ce-61d0-430c-98de-9174d78d47d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:47:41.306759  251263 system_pods.go:61] "nvidia-device-plugin-daemonset-6fx49" [4603aca8-96ff-429c-870e-aaa1a7987b07] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 23:47:41.306768  251263 system_pods.go:61] "registry-6b586f9694-g5tcd" [5c882c89-9d82-4657-b7a0-d20f145866ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:47:41.306780  251263 system_pods.go:61] "registry-creds-764b6fb674-c6k42" [b80bedd0-b303-4ba0-9c40-f2fd2464333c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:47:41.306789  251263 system_pods.go:61] "registry-proxy-xjdbm" [9ce7be2d-00fc-42ac-8617-38e2d4ecac77] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 23:47:41.306795  251263 system_pods.go:61] "snapshot-controller-7d9fbc56b8-r57wx" [136bb70d-9950-46db-83d9-09b543dc4f72] Pending
	I1121 23:47:41.306803  251263 system_pods.go:61] "storage-provisioner" [2855a3de-b990-447c-b094-274b5becf1da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 23:47:41.306812  251263 system_pods.go:74] duration metric: took 19.257263ms to wait for pod list to return data ...
	I1121 23:47:41.306823  251263 default_sa.go:34] waiting for default service account to be created ...
	I1121 23:47:41.323263  251263 default_sa.go:45] found service account: "default"
	I1121 23:47:41.323302  251263 default_sa.go:55] duration metric: took 16.457401ms for default service account to be created ...
	I1121 23:47:41.323317  251263 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 23:47:41.337749  251263 system_pods.go:86] 17 kube-system pods found
	I1121 23:47:41.337783  251263 system_pods.go:89] "amd-gpu-device-plugin-pd4sx" [88fffae7-a3c2-46ef-a382-867c1f45dd2f] Running
	I1121 23:47:41.337791  251263 system_pods.go:89] "coredns-66bc5c9577-kmf4p" [c2dae9ee-3a8e-4c5c-9880-256aae9475c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:41.337797  251263 system_pods.go:89] "coredns-66bc5c9577-tgk67" [ad56ae13-a7c4-44e3-a817-73aa300110b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:41.337803  251263 system_pods.go:89] "etcd-addons-266876" [6329b2aa-df0d-4707-8094-52ed6a9b70fa] Running
	I1121 23:47:41.337808  251263 system_pods.go:89] "kube-apiserver-addons-266876" [d6500ed5-1e8a-40e7-8761-ce5d9b817580] Running
	I1121 23:47:41.337812  251263 system_pods.go:89] "kube-controller-manager-addons-266876" [9afdbaac-04de-4ae4-a1a5-ab74382c1ee4] Running
	I1121 23:47:41.337817  251263 system_pods.go:89] "kube-ingress-dns-minikube" [4c8445af-f050-4525-a580-c6cb45567d21] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:47:41.337821  251263 system_pods.go:89] "kube-proxy-d6jsf" [8c9f1dbf-19b7-4f19-8f33-11b6886f1237] Running
	I1121 23:47:41.337826  251263 system_pods.go:89] "kube-scheduler-addons-266876" [c367cb26-5fd3-4d41-8752-d8b7d6fe6c13] Running
	I1121 23:47:41.337831  251263 system_pods.go:89] "metrics-server-85b7d694d7-tcd7p" [8b9a51ce-61d0-430c-98de-9174d78d47d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:47:41.337839  251263 system_pods.go:89] "nvidia-device-plugin-daemonset-6fx49" [4603aca8-96ff-429c-870e-aaa1a7987b07] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 23:47:41.337844  251263 system_pods.go:89] "registry-6b586f9694-g5tcd" [5c882c89-9d82-4657-b7a0-d20f145866ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:47:41.337849  251263 system_pods.go:89] "registry-creds-764b6fb674-c6k42" [b80bedd0-b303-4ba0-9c40-f2fd2464333c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:47:41.337854  251263 system_pods.go:89] "registry-proxy-xjdbm" [9ce7be2d-00fc-42ac-8617-38e2d4ecac77] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 23:47:41.337876  251263 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gcprx" [38cf49f5-ed6e-4aa5-bdfe-2494e5763f39] Pending
	I1121 23:47:41.337881  251263 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r57wx" [136bb70d-9950-46db-83d9-09b543dc4f72] Pending
	I1121 23:47:41.337885  251263 system_pods.go:89] "storage-provisioner" [2855a3de-b990-447c-b094-274b5becf1da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 23:47:41.337897  251263 system_pods.go:126] duration metric: took 14.572276ms to wait for k8s-apps to be running ...
	I1121 23:47:41.337909  251263 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 23:47:41.337964  251263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 23:47:41.391055  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:47:41.444001  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:41.452955  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:41.927933  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:41.929997  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:42.455799  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:42.455860  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:42.926969  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.613140073s)
	I1121 23:47:42.927027  251263 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-266876"
	I1121 23:47:42.927049  251263 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.875424504s)
	I1121 23:47:42.927114  251263 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.589124511s)
	I1121 23:47:42.927233  251263 system_svc.go:56] duration metric: took 1.589318384s WaitForService to wait for kubelet
	I1121 23:47:42.927248  251263 kubeadm.go:587] duration metric: took 12.611994145s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 23:47:42.927275  251263 node_conditions.go:102] verifying NodePressure condition ...
	I1121 23:47:42.928903  251263 out.go:179] * Verifying csi-hostpath-driver addon...
	I1121 23:47:42.928918  251263 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:47:42.930225  251263 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1121 23:47:42.930998  251263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1121 23:47:42.931460  251263 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1121 23:47:42.931483  251263 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1121 23:47:42.948957  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:42.956545  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:42.972599  251263 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 23:47:42.972629  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:42.991010  251263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1121 23:47:42.991043  251263 node_conditions.go:123] node cpu capacity is 2
	I1121 23:47:42.991060  251263 node_conditions.go:105] duration metric: took 63.779822ms to run NodePressure ...
	I1121 23:47:42.991073  251263 start.go:242] waiting for startup goroutines ...
	I1121 23:47:43.000454  251263 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1121 23:47:43.000488  251263 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1121 23:47:43.064083  251263 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 23:47:43.064114  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1121 23:47:43.143418  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 23:47:43.424997  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:43.428350  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:43.438981  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:43.744014  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.352903636s)
	I1121 23:47:43.926051  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:43.926403  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:43.939557  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:44.470136  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:44.470507  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:44.470583  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:44.610973  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.467509011s)
	I1121 23:47:44.612084  251263 addons.go:495] Verifying addon gcp-auth=true in "addons-266876"
	I1121 23:47:44.614664  251263 out.go:179] * Verifying gcp-auth addon...
	I1121 23:47:44.617037  251263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1121 23:47:44.679516  251263 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1121 23:47:44.679539  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:44.938585  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:44.939917  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:44.945173  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:45.125511  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:45.423184  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:45.424380  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:45.438459  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:45.621893  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:45.929603  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:45.933258  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:45.938917  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:46.123924  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:46.423081  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:46.425799  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:46.437310  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:46.623291  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:46.925943  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:46.926661  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:46.940308  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:47.120567  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:47.421527  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:47.422825  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:47.435356  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:47.622778  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:47.922908  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:47.925722  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:47.937113  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:48.122097  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:48.423467  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:48.423610  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:48.435064  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:48.622264  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:48.926889  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:48.926907  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:48.935809  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:49.124186  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:49.424165  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:49.424235  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:49.436947  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:49.623380  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:49.926485  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:49.926568  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:49.934726  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:50.149039  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:50.426766  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:50.427550  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:50.435800  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:50.623645  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:50.923166  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:50.924899  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:50.937932  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:51.120970  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:51.422946  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:51.423964  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:51.437143  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:51.623848  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:51.924227  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:51.929471  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:51.939629  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:52.261854  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:52.424962  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:52.428597  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:52.436986  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:52.622910  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:52.922271  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:52.924973  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:52.938365  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:53.121701  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:53.425753  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:53.438148  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:53.440564  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:53.709895  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:53.929068  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:53.931342  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:53.938714  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:54.122158  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:54.425360  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:54.428330  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:54.435907  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:54.623125  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:54.926160  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:54.926269  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:54.934959  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:55.123657  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:55.422851  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:55.423292  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:55.436852  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:55.621782  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:56.184531  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:56.185319  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:56.185351  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:56.185436  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:56.422356  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:56.422605  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:56.437477  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:56.621926  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:56.920916  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:56.921374  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:56.935238  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:57.120293  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:57.422033  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:57.424320  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:57.435388  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:57.621432  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:57.920963  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:57.924452  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:57.935839  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:58.121584  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:58.425091  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:58.425156  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:58.435426  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:58.635444  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:58.922739  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:58.923871  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:58.936112  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:59.123863  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:59.426020  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:59.430811  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:59.438808  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:59.623106  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:59.931900  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:59.936038  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:59.937959  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:00.122854  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:00.422993  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:00.424741  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:00.436196  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:00.620554  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:00.921652  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:00.922569  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:00.935087  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:01.123823  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:01.423850  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:01.425512  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:01.434928  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:01.621491  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:01.923505  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:01.924905  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:01.937201  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:02.121624  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:02.423602  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:02.423787  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:02.435107  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:02.620510  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:02.919996  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:02.921258  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:02.934427  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:03.121234  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:03.422602  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:03.422661  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:03.435654  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:03.627887  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:03.923184  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:03.923492  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:03.943565  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:04.122960  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:04.421986  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:04.422381  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:04.435361  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:04.623019  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:04.923848  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:04.925058  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:04.935882  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:05.121708  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:05.421718  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:05.421805  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:05.434879  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:05.622686  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:05.922353  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:05.923753  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:05.936216  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:06.120868  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:06.423712  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:06.423899  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:06.439806  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:06.625663  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:06.922260  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:06.922652  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:06.936062  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:07.121430  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:07.424027  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:07.424073  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:07.435511  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:07.622294  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:07.921125  251263 kapi.go:107] duration metric: took 27.005089483s to wait for kubernetes.io/minikube-addons=registry ...
	I1121 23:48:07.923396  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:07.939621  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:08.121478  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:08.519292  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:08.522400  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:08.626487  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:08.919824  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:08.935099  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:09.123034  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:09.427247  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:09.439663  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:09.630747  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:09.924829  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:09.937762  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:10.126266  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:10.423912  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:10.442758  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:10.829148  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:10.928186  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:10.938788  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:11.126344  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:11.423503  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:11.440161  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:11.628256  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:11.922200  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:12.026774  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:12.122410  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:12.425763  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:12.435748  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:12.620552  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:12.954050  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:12.957856  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:13.126813  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:13.421360  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:13.435025  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:13.629500  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:13.922707  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:13.935410  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:14.123341  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:14.426174  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:14.436803  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:14.622210  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:14.941433  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:14.941557  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:15.122789  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:15.422344  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:15.435838  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:15.620803  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:15.922769  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:15.936263  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:16.123330  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:16.420710  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:16.437443  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:16.622053  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:16.922695  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:16.940782  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:17.241963  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:17.422836  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:17.436564  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:17.623372  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:17.919854  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:17.948897  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:18.124153  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:18.423733  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:18.436717  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:18.622046  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:18.922805  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:18.935793  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:19.122329  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:19.425051  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:19.439118  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:19.619916  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:19.920748  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:19.937662  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:20.128846  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:20.427312  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:20.441072  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:20.627540  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:20.922225  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:20.935498  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:21.125438  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:21.421980  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:21.435607  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:21.622394  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:21.920638  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:21.935580  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:22.121779  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:22.425387  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:22.436106  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:22.622379  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:22.922035  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:22.939454  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:23.123644  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:23.422127  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:23.437099  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:23.621255  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:23.921598  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:23.936278  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:24.121938  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:24.421559  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:24.435263  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:24.621048  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:24.921427  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:24.936154  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:25.128780  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:25.436990  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:25.447989  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:25.627750  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:25.925784  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:25.936653  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:26.125097  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:26.421139  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:26.435288  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:26.621354  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:26.979865  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:26.982130  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:27.121596  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:27.421737  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:27.436413  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:27.622223  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:27.923259  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:27.938238  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:28.122777  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:28.422102  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:28.435098  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:28.624943  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:28.923578  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:28.934884  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:29.123227  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:29.422918  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:29.440055  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:29.621947  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:29.924766  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:29.943765  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:30.125218  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:30.427521  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:30.435473  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:30.622346  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:30.926321  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:30.935211  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:31.125820  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:31.423165  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:31.435981  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:31.624574  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:31.924255  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:31.937572  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:32.123297  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:32.420253  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:32.435092  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:32.620642  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:32.924708  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:32.936867  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:33.122959  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:33.421260  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:33.435115  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:33.622355  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:33.922446  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:33.937891  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:34.121936  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:34.422837  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:34.436876  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:34.621392  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:34.922989  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:34.936968  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:35.121994  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:35.420314  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:35.435229  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:35.620372  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:35.921246  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:35.935379  251263 kapi.go:107] duration metric: took 53.004380156s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1121 23:48:36.121002  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:36.421297  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:36.620475  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:36.920737  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:37.121903  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:37.420740  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:37.621573  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:37.920470  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:38.120871  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:38.419747  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:38.620870  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:38.919569  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:39.121472  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:39.420632  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:39.621914  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:39.919274  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:40.120595  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:40.420718  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:40.621509  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:40.920672  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:41.121166  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:41.422011  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:41.622380  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:41.921196  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:42.120596  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:42.420828  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:42.621388  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:42.921558  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:43.121925  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:43.419853  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:43.622393  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:43.920887  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:44.121285  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:44.420735  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:44.622063  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:44.920303  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:45.123622  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:45.422460  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:45.623240  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:45.938878  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:46.121145  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:46.421462  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:46.621556  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:46.920539  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:47.123242  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:47.434774  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:47.623534  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:47.929223  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:48.125077  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:48.421704  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:48.623369  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:48.922650  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:49.123639  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:49.421456  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:49.624574  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:49.931049  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:50.124348  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:50.420556  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:50.622234  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:50.924025  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:51.124075  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:51.423011  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:51.623295  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:51.920670  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:52.121233  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:52.424341  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:52.621172  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:52.921299  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:53.121769  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:53.420110  251263 kapi.go:107] duration metric: took 1m12.504106807s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1121 23:48:53.621962  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:54.127660  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:54.626400  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:55.122945  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:55.724403  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:56.123402  251263 kapi.go:107] duration metric: took 1m11.506366647s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1121 23:48:56.125238  251263 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-266876 cluster.
	I1121 23:48:56.126693  251263 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1121 23:48:56.128133  251263 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1121 23:48:56.129655  251263 out.go:179] * Enabled addons: amd-gpu-device-plugin, inspektor-gadget, ingress-dns, registry-creds, nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1121 23:48:56.131230  251263 addons.go:530] duration metric: took 1m25.815935443s for enable addons: enabled=[amd-gpu-device-plugin inspektor-gadget ingress-dns registry-creds nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1121 23:48:56.131297  251263 start.go:247] waiting for cluster config update ...
	I1121 23:48:56.131318  251263 start.go:256] writing updated cluster config ...
	I1121 23:48:56.131603  251263 ssh_runner.go:195] Run: rm -f paused
	I1121 23:48:56.139138  251263 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 23:48:56.143255  251263 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tgk67" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.149223  251263 pod_ready.go:94] pod "coredns-66bc5c9577-tgk67" is "Ready"
	I1121 23:48:56.149248  251263 pod_ready.go:86] duration metric: took 5.967724ms for pod "coredns-66bc5c9577-tgk67" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.152622  251263 pod_ready.go:83] waiting for pod "etcd-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.158325  251263 pod_ready.go:94] pod "etcd-addons-266876" is "Ready"
	I1121 23:48:56.158348  251263 pod_ready.go:86] duration metric: took 5.699178ms for pod "etcd-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.161017  251263 pod_ready.go:83] waiting for pod "kube-apiserver-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.165701  251263 pod_ready.go:94] pod "kube-apiserver-addons-266876" is "Ready"
	I1121 23:48:56.165731  251263 pod_ready.go:86] duration metric: took 4.68133ms for pod "kube-apiserver-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.167794  251263 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.546100  251263 pod_ready.go:94] pod "kube-controller-manager-addons-266876" is "Ready"
	I1121 23:48:56.546140  251263 pod_ready.go:86] duration metric: took 378.321116ms for pod "kube-controller-manager-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.744763  251263 pod_ready.go:83] waiting for pod "kube-proxy-d6jsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:57.145028  251263 pod_ready.go:94] pod "kube-proxy-d6jsf" is "Ready"
	I1121 23:48:57.145065  251263 pod_ready.go:86] duration metric: took 400.263759ms for pod "kube-proxy-d6jsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:57.344109  251263 pod_ready.go:83] waiting for pod "kube-scheduler-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:57.744881  251263 pod_ready.go:94] pod "kube-scheduler-addons-266876" is "Ready"
	I1121 23:48:57.744924  251263 pod_ready.go:86] duration metric: took 400.779811ms for pod "kube-scheduler-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:57.744942  251263 pod_ready.go:40] duration metric: took 1.605761032s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 23:48:57.792759  251263 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 23:48:57.794548  251263 out.go:179] * Done! kubectl is now configured to use "addons-266876" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.553209868Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763769105553183131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:532066,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a41ebed2-b835-48fb-a024-b4f1e74207d0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.554701091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ee498f8-9be7-4b0d-baeb-52a497d97a67 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.555172903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ee498f8-9be7-4b0d-baeb-52a497d97a67 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.556060231Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:991f92b0bd577e0738eef35d65b7c9638d3df53ccea140bf89bfa778f911574f,PodSandboxId:f7f9ecdee49d2fbed73c95266531e98528194cc29d1af1e15c07c6a5e790026a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763768959705557696,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e3ed8c7-5788-4d41-aba1-71043fc65fb1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1205f66bfddc482ac5d5dd1e86224c67e867616e61123b413f3bb6856473dc12,PodSandboxId:7a5080c12c12abdaab939ff714e1e919b116a416135fb5a4c0954b2d73aa77d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763768941325630164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b5956ac-11bb-458f-953a-f0fa68bf575e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f168b4d8b7da74efedb3be41e39c6d07020b9698695572f30b74f190a4d8dac,PodSandboxId:6d7ec67173c108730a451b146148cf342b99db36b05e9ea513110f7e26d0585e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763768932397704115,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-lg7z6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6a72ef50-d6e3-496e-bb81-685892037954,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:51813a3108d9e54ce1c3496176ac5114e7bd1188f2c3673f4a4a3480910eced6,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351f
d438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763768914769733150,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491a8ff7c586acede1f8b3b37821df605946465cc57d997a527260e81bc84cbe,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763768913042350736,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62345e24511bafa136f68a223ce7ed0c511a449ccaac17d536939d218364c8e0,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763768911235493555,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c36592147c998eee903b15108ec385188d6a10ba82bdbc75a1e806aedb354e7,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c
9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763768910213861282,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552ab85d759ae6b592b4d62982e120e0f046fdad6cf73d39fa8e079973301b19,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763768908471046022,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b904d30a44673800c0c3034a976f6ac03bbb3ec299f6d92bb1a5c6ea170a7c57,PodSandboxId:ff134a61cd64ed6f5542d7c8c8469ae269bd32b9fdf955f510ccf7e0b5589fb8,Meta
data:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763768907183648311,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dfe8e9b-0142-42cf-ba29-27aeadf91605,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4683ce225f87d35eb79e6cefdb9d7c48be7cc40e3230995b8b0843525d0bd27,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7
d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763768905559322925,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2e4d571c23b9bf
7d4bfb72e3338c12264469ce70e7e72754afd69967515d13,PodSandboxId:5007bb0b80f021b12e7f6a9a43425ddf2f36685c53a522cd08d8b714c4ec20eb,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763768903867338270,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79de7084-4282-49f8-a4d1-582323611ce3,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:d36db081fc46e4a769de524c439df3776fa94dd533d426b7d39c2e1306653d01,PodSandboxId:554f3c9987e3066901d6ca4d92840af21c420dd97f4a4542964dfbb8d915e03e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763768903014677441,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ht8dl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85e84c47-6bc3-4409-8954-c24ef4d80f99,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9171d1cd70aebc77
78cc2ae6b609dbe0a17d4a5c28a86a5944f33b666258a45,PodSandboxId:3414eb7a0316ace15e9a899adee597efdbbd854673912b762e969800af6a4f8a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763768902335122991,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xq799,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0cdf45b5-b337-4541-83d3-b7fdc232f1e1,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:37dea366f964b8791137712d958e380263c762d6943592d23f145fad119cd6b5,PodSandboxId:1e73211f223b9745771ca2d0de7f25252821645aa5ecdbedb9590018486b8b7e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900663296805,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-gcprx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cf49f5-ed6e-4aa5-bdfe-2494e5763f39,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f748bb4b27c533f7b87016c8a98346dcf32afaef8d230d73d0764252cbb72f,PodSandboxId:1c267215c3e5b6dac05194130ddb58527757745ce779475a28dd1c84610883b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900552896022,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-r57wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136bb70d-9950-46db-83d9-09b543dc4f72,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe7bb60492b04fd1b087025749469403adeef011272e8f3f22c00ada731cdcb3,PodSandboxId:15b64b5856939f8bac45fb57619eff6fafc932dc05ba97fe1943560b384bd630,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763768898866225812,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vl5f9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09a904d5-755f-4f1f-9525-b10e4e4b57a7,},Annotations:map[string]string{io.kubernetes.container.hash
: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bc290854e78459afc859cfdef271a0dcca5688dfdec552b77d3bddd2556238,PodSandboxId:182146df6179175ac72d7de036cbf942c44e60401b302493f3eb9d4f09c65f1c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763768876563315596,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8445af-f050-4525-a580-c6cb45567d21,},Annotations:map[
string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409,PodSandboxId:f1662e37013476f1ac5ede7d21406a137d8e8672c36dcf49068d30172dc639f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763768860250918214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 2855a3de-b990-447c-b094-274b5becf1da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d414f30f9b272529b258344187ca2317cbc5a4141f4ffe1bc6fa0f7df80bd5bb,PodSandboxId:79f2d64c3813a612bcd08d986b458be1fc1d2f0a3922d4db70b605b18db55f18,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763768857987378401,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pd4s
x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fffae7-a3c2-46ef-a382-867c1f45dd2f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198,PodSandboxId:9607023c4fe8e371d977e1e4a2b52b0e80675b763c0cd2b3ae209db00b96f2cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763768852094801650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tgk67,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: ad56ae13-a7c4-44e3-a817-73aa300110b6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40,PodSandboxId:1ce41f042f494eef5d0be46b1db7d599bafdcc97fef69e0d7f1782bd275c54ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6
a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763768851042427654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6jsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9f1dbf-19b7-4f19-8f33-11b6886f1237,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d,PodSandboxId:a6e11d2b9834f78610e1487003d17adb09fe565622e235df1217303546294639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763768839147876729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9c952f476472031bed61db83e3c978,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e,PodSandboxId:212b2600cae8f28bb69999f005d0b52383e46a97173ab430efc504c81b7dd8ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSp
ec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763768839130699997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca8736316ec035d06c4ec08eb70b85a,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7,PodSandboxId:7fb7e928bee471d4ec16ce9fb677f233172c530080bee3cf5baf5b1
c28826973,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763768839107116372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0a4452486189aa3ae5b593dc3a43cac,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a216f1821ac9afeecc64f7303745
36676eea6c11ca4c8f9bde7913e3b74b219,PodSandboxId:43d68a4f9086a85d2e8c987fa17f958b95725f1d9f37b716010e99d58676be1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763768839078348144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e0cb0275164bea778c164d97826c53,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ee498f8-9be7-4b0d-baeb-52a497d97a67 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.595250671Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a0175ce-1a4e-4773-acd3-56b792be6602 name=/runtime.v1.RuntimeService/Version
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.595349281Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a0175ce-1a4e-4773-acd3-56b792be6602 name=/runtime.v1.RuntimeService/Version
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.596695558Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=05f73e53-9a5b-4231-ada4-99bc27d0ee89 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.597866488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763769105597839232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:532066,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=05f73e53-9a5b-4231-ada4-99bc27d0ee89 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.599131293Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b99bbaa3-2b86-4b06-ab3b-aac04dce552c name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.599333747Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b99bbaa3-2b86-4b06-ab3b-aac04dce552c name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.600424108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:991f92b0bd577e0738eef35d65b7c9638d3df53ccea140bf89bfa778f911574f,PodSandboxId:f7f9ecdee49d2fbed73c95266531e98528194cc29d1af1e15c07c6a5e790026a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763768959705557696,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e3ed8c7-5788-4d41-aba1-71043fc65fb1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1205f66bfddc482ac5d5dd1e86224c67e867616e61123b413f3bb6856473dc12,PodSandboxId:7a5080c12c12abdaab939ff714e1e919b116a416135fb5a4c0954b2d73aa77d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763768941325630164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b5956ac-11bb-458f-953a-f0fa68bf575e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f168b4d8b7da74efedb3be41e39c6d07020b9698695572f30b74f190a4d8dac,PodSandboxId:6d7ec67173c108730a451b146148cf342b99db36b05e9ea513110f7e26d0585e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763768932397704115,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-lg7z6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6a72ef50-d6e3-496e-bb81-685892037954,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:51813a3108d9e54ce1c3496176ac5114e7bd1188f2c3673f4a4a3480910eced6,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351f
d438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763768914769733150,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491a8ff7c586acede1f8b3b37821df605946465cc57d997a527260e81bc84cbe,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763768913042350736,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62345e24511bafa136f68a223ce7ed0c511a449ccaac17d536939d218364c8e0,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763768911235493555,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c36592147c998eee903b15108ec385188d6a10ba82bdbc75a1e806aedb354e7,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c
9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763768910213861282,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552ab85d759ae6b592b4d62982e120e0f046fdad6cf73d39fa8e079973301b19,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763768908471046022,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b904d30a44673800c0c3034a976f6ac03bbb3ec299f6d92bb1a5c6ea170a7c57,PodSandboxId:ff134a61cd64ed6f5542d7c8c8469ae269bd32b9fdf955f510ccf7e0b5589fb8,Meta
data:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763768907183648311,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dfe8e9b-0142-42cf-ba29-27aeadf91605,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4683ce225f87d35eb79e6cefdb9d7c48be7cc40e3230995b8b0843525d0bd27,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7
d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763768905559322925,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2e4d571c23b9bf
7d4bfb72e3338c12264469ce70e7e72754afd69967515d13,PodSandboxId:5007bb0b80f021b12e7f6a9a43425ddf2f36685c53a522cd08d8b714c4ec20eb,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763768903867338270,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79de7084-4282-49f8-a4d1-582323611ce3,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:d36db081fc46e4a769de524c439df3776fa94dd533d426b7d39c2e1306653d01,PodSandboxId:554f3c9987e3066901d6ca4d92840af21c420dd97f4a4542964dfbb8d915e03e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763768903014677441,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ht8dl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85e84c47-6bc3-4409-8954-c24ef4d80f99,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9171d1cd70aebc77
78cc2ae6b609dbe0a17d4a5c28a86a5944f33b666258a45,PodSandboxId:3414eb7a0316ace15e9a899adee597efdbbd854673912b762e969800af6a4f8a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763768902335122991,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xq799,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0cdf45b5-b337-4541-83d3-b7fdc232f1e1,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:37dea366f964b8791137712d958e380263c762d6943592d23f145fad119cd6b5,PodSandboxId:1e73211f223b9745771ca2d0de7f25252821645aa5ecdbedb9590018486b8b7e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900663296805,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-gcprx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cf49f5-ed6e-4aa5-bdfe-2494e5763f39,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f748bb4b27c533f7b87016c8a98346dcf32afaef8d230d73d0764252cbb72f,PodSandboxId:1c267215c3e5b6dac05194130ddb58527757745ce779475a28dd1c84610883b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900552896022,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-r57wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136bb70d-9950-46db-83d9-09b543dc4f72,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe7bb60492b04fd1b087025749469403adeef011272e8f3f22c00ada731cdcb3,PodSandboxId:15b64b5856939f8bac45fb57619eff6fafc932dc05ba97fe1943560b384bd630,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763768898866225812,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vl5f9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09a904d5-755f-4f1f-9525-b10e4e4b57a7,},Annotations:map[string]string{io.kubernetes.container.hash
: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bc290854e78459afc859cfdef271a0dcca5688dfdec552b77d3bddd2556238,PodSandboxId:182146df6179175ac72d7de036cbf942c44e60401b302493f3eb9d4f09c65f1c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763768876563315596,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8445af-f050-4525-a580-c6cb45567d21,},Annotations:map[
string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409,PodSandboxId:f1662e37013476f1ac5ede7d21406a137d8e8672c36dcf49068d30172dc639f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763768860250918214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 2855a3de-b990-447c-b094-274b5becf1da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d414f30f9b272529b258344187ca2317cbc5a4141f4ffe1bc6fa0f7df80bd5bb,PodSandboxId:79f2d64c3813a612bcd08d986b458be1fc1d2f0a3922d4db70b605b18db55f18,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763768857987378401,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pd4s
x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fffae7-a3c2-46ef-a382-867c1f45dd2f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198,PodSandboxId:9607023c4fe8e371d977e1e4a2b52b0e80675b763c0cd2b3ae209db00b96f2cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763768852094801650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tgk67,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: ad56ae13-a7c4-44e3-a817-73aa300110b6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40,PodSandboxId:1ce41f042f494eef5d0be46b1db7d599bafdcc97fef69e0d7f1782bd275c54ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6
a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763768851042427654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6jsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9f1dbf-19b7-4f19-8f33-11b6886f1237,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d,PodSandboxId:a6e11d2b9834f78610e1487003d17adb09fe565622e235df1217303546294639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763768839147876729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9c952f476472031bed61db83e3c978,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e,PodSandboxId:212b2600cae8f28bb69999f005d0b52383e46a97173ab430efc504c81b7dd8ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSp
ec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763768839130699997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca8736316ec035d06c4ec08eb70b85a,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7,PodSandboxId:7fb7e928bee471d4ec16ce9fb677f233172c530080bee3cf5baf5b1
c28826973,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763768839107116372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0a4452486189aa3ae5b593dc3a43cac,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a216f1821ac9afeecc64f7303745
36676eea6c11ca4c8f9bde7913e3b74b219,PodSandboxId:43d68a4f9086a85d2e8c987fa17f958b95725f1d9f37b716010e99d58676be1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763768839078348144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e0cb0275164bea778c164d97826c53,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b99bbaa3-2b86-4b06-ab3b-aac04dce552c name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.634514148Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4dbfa20a-d21c-4ac6-97be-75e76f3f1df4 name=/runtime.v1.RuntimeService/Version
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.634800173Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4dbfa20a-d21c-4ac6-97be-75e76f3f1df4 name=/runtime.v1.RuntimeService/Version
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.636504279Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1b2d423-d513-49b6-83ec-8dc34cad77bf name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.638000468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763769105637972480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:532066,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1b2d423-d513-49b6-83ec-8dc34cad77bf name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.639058522Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=140fc03c-5931-4bd9-ad2b-735f2828185a name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.639123699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=140fc03c-5931-4bd9-ad2b-735f2828185a name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.639594722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:991f92b0bd577e0738eef35d65b7c9638d3df53ccea140bf89bfa778f911574f,PodSandboxId:f7f9ecdee49d2fbed73c95266531e98528194cc29d1af1e15c07c6a5e790026a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763768959705557696,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e3ed8c7-5788-4d41-aba1-71043fc65fb1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1205f66bfddc482ac5d5dd1e86224c67e867616e61123b413f3bb6856473dc12,PodSandboxId:7a5080c12c12abdaab939ff714e1e919b116a416135fb5a4c0954b2d73aa77d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763768941325630164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b5956ac-11bb-458f-953a-f0fa68bf575e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f168b4d8b7da74efedb3be41e39c6d07020b9698695572f30b74f190a4d8dac,PodSandboxId:6d7ec67173c108730a451b146148cf342b99db36b05e9ea513110f7e26d0585e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763768932397704115,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-lg7z6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6a72ef50-d6e3-496e-bb81-685892037954,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:51813a3108d9e54ce1c3496176ac5114e7bd1188f2c3673f4a4a3480910eced6,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351f
d438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763768914769733150,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491a8ff7c586acede1f8b3b37821df605946465cc57d997a527260e81bc84cbe,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763768913042350736,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62345e24511bafa136f68a223ce7ed0c511a449ccaac17d536939d218364c8e0,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763768911235493555,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c36592147c998eee903b15108ec385188d6a10ba82bdbc75a1e806aedb354e7,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c
9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763768910213861282,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552ab85d759ae6b592b4d62982e120e0f046fdad6cf73d39fa8e079973301b19,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763768908471046022,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b904d30a44673800c0c3034a976f6ac03bbb3ec299f6d92bb1a5c6ea170a7c57,PodSandboxId:ff134a61cd64ed6f5542d7c8c8469ae269bd32b9fdf955f510ccf7e0b5589fb8,Meta
data:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763768907183648311,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dfe8e9b-0142-42cf-ba29-27aeadf91605,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4683ce225f87d35eb79e6cefdb9d7c48be7cc40e3230995b8b0843525d0bd27,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7
d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763768905559322925,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2e4d571c23b9bf
7d4bfb72e3338c12264469ce70e7e72754afd69967515d13,PodSandboxId:5007bb0b80f021b12e7f6a9a43425ddf2f36685c53a522cd08d8b714c4ec20eb,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763768903867338270,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79de7084-4282-49f8-a4d1-582323611ce3,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:d36db081fc46e4a769de524c439df3776fa94dd533d426b7d39c2e1306653d01,PodSandboxId:554f3c9987e3066901d6ca4d92840af21c420dd97f4a4542964dfbb8d915e03e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763768903014677441,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ht8dl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85e84c47-6bc3-4409-8954-c24ef4d80f99,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9171d1cd70aebc77
78cc2ae6b609dbe0a17d4a5c28a86a5944f33b666258a45,PodSandboxId:3414eb7a0316ace15e9a899adee597efdbbd854673912b762e969800af6a4f8a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763768902335122991,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xq799,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0cdf45b5-b337-4541-83d3-b7fdc232f1e1,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:37dea366f964b8791137712d958e380263c762d6943592d23f145fad119cd6b5,PodSandboxId:1e73211f223b9745771ca2d0de7f25252821645aa5ecdbedb9590018486b8b7e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900663296805,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-gcprx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cf49f5-ed6e-4aa5-bdfe-2494e5763f39,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f748bb4b27c533f7b87016c8a98346dcf32afaef8d230d73d0764252cbb72f,PodSandboxId:1c267215c3e5b6dac05194130ddb58527757745ce779475a28dd1c84610883b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900552896022,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-r57wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136bb70d-9950-46db-83d9-09b543dc4f72,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe7bb60492b04fd1b087025749469403adeef011272e8f3f22c00ada731cdcb3,PodSandboxId:15b64b5856939f8bac45fb57619eff6fafc932dc05ba97fe1943560b384bd630,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763768898866225812,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vl5f9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09a904d5-755f-4f1f-9525-b10e4e4b57a7,},Annotations:map[string]string{io.kubernetes.container.hash
: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bc290854e78459afc859cfdef271a0dcca5688dfdec552b77d3bddd2556238,PodSandboxId:182146df6179175ac72d7de036cbf942c44e60401b302493f3eb9d4f09c65f1c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763768876563315596,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8445af-f050-4525-a580-c6cb45567d21,},Annotations:map[
string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409,PodSandboxId:f1662e37013476f1ac5ede7d21406a137d8e8672c36dcf49068d30172dc639f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763768860250918214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 2855a3de-b990-447c-b094-274b5becf1da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d414f30f9b272529b258344187ca2317cbc5a4141f4ffe1bc6fa0f7df80bd5bb,PodSandboxId:79f2d64c3813a612bcd08d986b458be1fc1d2f0a3922d4db70b605b18db55f18,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763768857987378401,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pd4s
x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fffae7-a3c2-46ef-a382-867c1f45dd2f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198,PodSandboxId:9607023c4fe8e371d977e1e4a2b52b0e80675b763c0cd2b3ae209db00b96f2cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763768852094801650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tgk67,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: ad56ae13-a7c4-44e3-a817-73aa300110b6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40,PodSandboxId:1ce41f042f494eef5d0be46b1db7d599bafdcc97fef69e0d7f1782bd275c54ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6
a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763768851042427654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6jsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9f1dbf-19b7-4f19-8f33-11b6886f1237,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d,PodSandboxId:a6e11d2b9834f78610e1487003d17adb09fe565622e235df1217303546294639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763768839147876729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9c952f476472031bed61db83e3c978,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e,PodSandboxId:212b2600cae8f28bb69999f005d0b52383e46a97173ab430efc504c81b7dd8ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSp
ec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763768839130699997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca8736316ec035d06c4ec08eb70b85a,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7,PodSandboxId:7fb7e928bee471d4ec16ce9fb677f233172c530080bee3cf5baf5b1
c28826973,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763768839107116372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0a4452486189aa3ae5b593dc3a43cac,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a216f1821ac9afeecc64f7303745
36676eea6c11ca4c8f9bde7913e3b74b219,PodSandboxId:43d68a4f9086a85d2e8c987fa17f958b95725f1d9f37b716010e99d58676be1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763768839078348144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e0cb0275164bea778c164d97826c53,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=140fc03c-5931-4bd9-ad2b-735f2828185a name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.673498692Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dba9004b-37cb-4d1a-af77-2d995e943938 name=/runtime.v1.RuntimeService/Version
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.673614073Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dba9004b-37cb-4d1a-af77-2d995e943938 name=/runtime.v1.RuntimeService/Version
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.675790250Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55ad8030-2fda-4c41-9313-714466988958 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.677357697Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763769105677268717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:532066,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55ad8030-2fda-4c41-9313-714466988958 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.678703487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7529c168-9b69-4604-ad08-95ce4d9a7fa4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.678783145Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7529c168-9b69-4604-ad08-95ce4d9a7fa4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:51:45 addons-266876 crio[816]: time="2025-11-21 23:51:45.679341937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:991f92b0bd577e0738eef35d65b7c9638d3df53ccea140bf89bfa778f911574f,PodSandboxId:f7f9ecdee49d2fbed73c95266531e98528194cc29d1af1e15c07c6a5e790026a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763768959705557696,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e3ed8c7-5788-4d41-aba1-71043fc65fb1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1205f66bfddc482ac5d5dd1e86224c67e867616e61123b413f3bb6856473dc12,PodSandboxId:7a5080c12c12abdaab939ff714e1e919b116a416135fb5a4c0954b2d73aa77d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763768941325630164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b5956ac-11bb-458f-953a-f0fa68bf575e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f168b4d8b7da74efedb3be41e39c6d07020b9698695572f30b74f190a4d8dac,PodSandboxId:6d7ec67173c108730a451b146148cf342b99db36b05e9ea513110f7e26d0585e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763768932397704115,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-lg7z6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6a72ef50-d6e3-496e-bb81-685892037954,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:51813a3108d9e54ce1c3496176ac5114e7bd1188f2c3673f4a4a3480910eced6,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351f
d438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763768914769733150,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491a8ff7c586acede1f8b3b37821df605946465cc57d997a527260e81bc84cbe,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763768913042350736,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62345e24511bafa136f68a223ce7ed0c511a449ccaac17d536939d218364c8e0,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763768911235493555,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c36592147c998eee903b15108ec385188d6a10ba82bdbc75a1e806aedb354e7,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c
9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763768910213861282,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552ab85d759ae6b592b4d62982e120e0f046fdad6cf73d39fa8e079973301b19,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt
:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763768908471046022,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b904d30a44673800c0c3034a976f6ac03bbb3ec299f6d92bb1a5c6ea170a7c57,PodSandboxId:ff134a61cd64ed6f5542d7c8c8469ae269bd32b9fdf955f510ccf7e0b5589fb8,Meta
data:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1763768907183648311,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dfe8e9b-0142-42cf-ba29-27aeadf91605,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4683ce225f87d35eb79e6cefdb9d7c48be7cc40e3230995b8b0843525d0bd27,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7
d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763768905559322925,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2e4d571c23b9bf
7d4bfb72e3338c12264469ce70e7e72754afd69967515d13,PodSandboxId:5007bb0b80f021b12e7f6a9a43425ddf2f36685c53a522cd08d8b714c4ec20eb,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763768903867338270,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79de7084-4282-49f8-a4d1-582323611ce3,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contain
er{Id:d36db081fc46e4a769de524c439df3776fa94dd533d426b7d39c2e1306653d01,PodSandboxId:554f3c9987e3066901d6ca4d92840af21c420dd97f4a4542964dfbb8d915e03e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763768903014677441,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ht8dl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 85e84c47-6bc3-4409-8954-c24ef4d80f99,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9171d1cd70aebc77
78cc2ae6b609dbe0a17d4a5c28a86a5944f33b666258a45,PodSandboxId:3414eb7a0316ace15e9a899adee597efdbbd854673912b762e969800af6a4f8a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763768902335122991,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xq799,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0cdf45b5-b337-4541-83d3-b7fdc232f1e1,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:37dea366f964b8791137712d958e380263c762d6943592d23f145fad119cd6b5,PodSandboxId:1e73211f223b9745771ca2d0de7f25252821645aa5ecdbedb9590018486b8b7e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900663296805,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-gcprx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cf49f5-ed6e-4aa5-bdfe-2494e5763f39,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f748bb4b27c533f7b87016c8a98346dcf32afaef8d230d73d0764252cbb72f,PodSandboxId:1c267215c3e5b6dac05194130ddb58527757745ce779475a28dd1c84610883b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900552896022,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-r57wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136bb70d-9950-46db-83d9-09b543dc4f72,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe7bb60492b04fd1b087025749469403adeef011272e8f3f22c00ada731cdcb3,PodSandboxId:15b64b5856939f8bac45fb57619eff6fafc932dc05ba97fe1943560b384bd630,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763768898866225812,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vl5f9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09a904d5-755f-4f1f-9525-b10e4e4b57a7,},Annotations:map[string]string{io.kubernetes.container.hash
: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bc290854e78459afc859cfdef271a0dcca5688dfdec552b77d3bddd2556238,PodSandboxId:182146df6179175ac72d7de036cbf942c44e60401b302493f3eb9d4f09c65f1c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763768876563315596,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c8445af-f050-4525-a580-c6cb45567d21,},Annotations:map[
string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409,PodSandboxId:f1662e37013476f1ac5ede7d21406a137d8e8672c36dcf49068d30172dc639f2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763768860250918214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: 2855a3de-b990-447c-b094-274b5becf1da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d414f30f9b272529b258344187ca2317cbc5a4141f4ffe1bc6fa0f7df80bd5bb,PodSandboxId:79f2d64c3813a612bcd08d986b458be1fc1d2f0a3922d4db70b605b18db55f18,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763768857987378401,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pd4s
x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fffae7-a3c2-46ef-a382-867c1f45dd2f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198,PodSandboxId:9607023c4fe8e371d977e1e4a2b52b0e80675b763c0cd2b3ae209db00b96f2cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763768852094801650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tgk67,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: ad56ae13-a7c4-44e3-a817-73aa300110b6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40,PodSandboxId:1ce41f042f494eef5d0be46b1db7d599bafdcc97fef69e0d7f1782bd275c54ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6
a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763768851042427654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6jsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9f1dbf-19b7-4f19-8f33-11b6886f1237,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d,PodSandboxId:a6e11d2b9834f78610e1487003d17adb09fe565622e235df1217303546294639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763768839147876729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9c952f476472031bed61db83e3c978,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e,PodSandboxId:212b2600cae8f28bb69999f005d0b52383e46a97173ab430efc504c81b7dd8ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSp
ec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763768839130699997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca8736316ec035d06c4ec08eb70b85a,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7,PodSandboxId:7fb7e928bee471d4ec16ce9fb677f233172c530080bee3cf5baf5b1
c28826973,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763768839107116372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0a4452486189aa3ae5b593dc3a43cac,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a216f1821ac9afeecc64f7303745
36676eea6c11ca4c8f9bde7913e3b74b219,PodSandboxId:43d68a4f9086a85d2e8c987fa17f958b95725f1d9f37b716010e99d58676be1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763768839078348144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e0cb0275164bea778c164d97826c53,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7529c168-9b69-4604-ad08-95ce4d9a7fa4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	991f92b0bd577       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago       Running             nginx                                    0                   f7f9ecdee49d2       nginx                                      default
	1205f66bfddc4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          2 minutes ago       Running             busybox                                  0                   7a5080c12c12a       busybox                                    default
	3f168b4d8b7da       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago       Running             controller                               0                   6d7ec67173c10       ingress-nginx-controller-6c8bf45fb-lg7z6   ingress-nginx
	51813a3108d9e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   d6163d79acc66       csi-hostpathplugin-gvwq9                   kube-system
	491a8ff7c586a       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago       Running             csi-provisioner                          0                   d6163d79acc66       csi-hostpathplugin-gvwq9                   kube-system
	62345e24511ba       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago       Running             liveness-probe                           0                   d6163d79acc66       csi-hostpathplugin-gvwq9                   kube-system
	4c36592147c99       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago       Running             hostpath                                 0                   d6163d79acc66       csi-hostpathplugin-gvwq9                   kube-system
	552ab85d759ae       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago       Running             node-driver-registrar                    0                   d6163d79acc66       csi-hostpathplugin-gvwq9                   kube-system
	b904d30a44673       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago       Running             csi-attacher                             0                   ff134a61cd64e       csi-hostpath-attacher-0                    kube-system
	b4683ce225f87       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago       Running             csi-external-health-monitor-controller   0                   d6163d79acc66       csi-hostpathplugin-gvwq9                   kube-system
	ea2e4d571c23b       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago       Running             csi-resizer                              0                   5007bb0b80f02       csi-hostpath-resizer-0                     kube-system
	d36db081fc46e       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             3 minutes ago       Exited              patch                                    1                   554f3c9987e30       ingress-nginx-admission-patch-ht8dl        ingress-nginx
	d9171d1cd70ae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f                   3 minutes ago       Exited              create                                   0                   3414eb7a0316a       ingress-nginx-admission-create-xq799       ingress-nginx
	37dea366f964b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   1e73211f223b9       snapshot-controller-7d9fbc56b8-gcprx       kube-system
	16f748bb4b27c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago       Running             volume-snapshot-controller               0                   1c267215c3e5b       snapshot-controller-7d9fbc56b8-r57wx       kube-system
	fe7bb60492b04       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago       Running             local-path-provisioner                   0                   15b64b5856939       local-path-provisioner-648f6765c9-vl5f9    local-path-storage
	e7bc290854e78       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago       Running             minikube-ingress-dns                     0                   182146df61791       kube-ingress-dns-minikube                  kube-system
	62fac18e2a4ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago       Running             storage-provisioner                      0                   f1662e3701347       storage-provisioner                        kube-system
	d414f30f9b272       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     4 minutes ago       Running             amd-gpu-device-plugin                    0                   79f2d64c3813a       amd-gpu-device-plugin-pd4sx                kube-system
	e880e3438bfbb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             4 minutes ago       Running             coredns                                  0                   9607023c4fe8e       coredns-66bc5c9577-tgk67                   kube-system
	9ba59e7c8953d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago       Running             kube-proxy                               0                   1ce41f042f494       kube-proxy-d6jsf                           kube-system
	8d89e7dd43a03       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago       Running             kube-scheduler                           0                   a6e11d2b9834f       kube-scheduler-addons-266876               kube-system
	5c5891e44197c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago       Running             etcd                                     0                   212b2600cae8f       etcd-addons-266876                         kube-system
	9b2349c8754b0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago       Running             kube-apiserver                           0                   7fb7e928bee47       kube-apiserver-addons-266876               kube-system
	3a216f1821ac9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago       Running             kube-controller-manager                  0                   43d68a4f9086a       kube-controller-manager-addons-266876      kube-system
	
	
	==> coredns [e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198] <==
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 10.244.0.23:38034 - 36875 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000620114s
	[INFO] 10.244.0.23:40973 - 25486 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000178533s
	[INFO] 10.244.0.23:41681 - 40964 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000163049s
	[INFO] 10.244.0.23:47936 - 55627 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000146061s
	[INFO] 10.244.0.23:57173 - 44150 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001233146s
	[INFO] 10.244.0.23:48993 - 8029 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000276551s
	[INFO] 10.244.0.23:50684 - 42721 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001523821s
	[INFO] 10.244.0.23:45784 - 22668 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001444737s
	[INFO] 10.244.0.27:39628 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000320107s
	[INFO] 10.244.0.27:34513 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000140101s
	
	
	==> describe nodes <==
	Name:               addons-266876
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-266876
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=addons-266876
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T23_47_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-266876
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-266876"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 23:47:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-266876
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 23:51:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 23:49:58 +0000   Fri, 21 Nov 2025 23:47:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 23:49:58 +0000   Fri, 21 Nov 2025 23:47:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 23:49:58 +0000   Fri, 21 Nov 2025 23:47:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 23:49:58 +0000   Fri, 21 Nov 2025 23:47:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    addons-266876
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4a95d5c27154bec8bc2a50909bf4217
	  System UUID:                c4a95d5c-2715-4bec-8bc2-a50909bf4217
	  Boot ID:                    7afcec11-c11b-4436-b252-c2dac139e51f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  default                     hello-world-app-5d498dc89-sqvxb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-lg7z6    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m7s
	  kube-system                 amd-gpu-device-plugin-pd4sx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 coredns-66bc5c9577-tgk67                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m16s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 csi-hostpathplugin-gvwq9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 etcd-addons-266876                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m22s
	  kube-system                 kube-apiserver-addons-266876                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-addons-266876       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-d6jsf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-scheduler-addons-266876                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 snapshot-controller-7d9fbc56b8-gcprx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 snapshot-controller-7d9fbc56b8-r57wx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  local-path-storage          local-path-provisioner-648f6765c9-vl5f9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  Starting                 4m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m28s (x8 over 4m28s)  kubelet          Node addons-266876 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s (x8 over 4m28s)  kubelet          Node addons-266876 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s (x7 over 4m28s)  kubelet          Node addons-266876 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s                  kubelet          Node addons-266876 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s                  kubelet          Node addons-266876 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s                  kubelet          Node addons-266876 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m20s                  kubelet          Node addons-266876 status is now: NodeReady
	  Normal  RegisteredNode           4m17s                  node-controller  Node addons-266876 event: Registered Node addons-266876 in Controller
	
	
	==> dmesg <==
	[  +0.350546] kauditd_printk_skb: 18 callbacks suppressed
	[  +1.615207] kauditd_printk_skb: 297 callbacks suppressed
	[  +1.386553] kauditd_printk_skb: 314 callbacks suppressed
	[  +3.245635] kauditd_printk_skb: 404 callbacks suppressed
	[  +8.078733] kauditd_printk_skb: 5 callbacks suppressed
	[Nov21 23:48] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.490595] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.260482] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.041216] kauditd_printk_skb: 113 callbacks suppressed
	[  +5.004515] kauditd_printk_skb: 80 callbacks suppressed
	[  +3.836804] kauditd_printk_skb: 136 callbacks suppressed
	[  +4.200452] kauditd_printk_skb: 82 callbacks suppressed
	[  +0.000029] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000031] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.254098] kauditd_printk_skb: 53 callbacks suppressed
	[Nov21 23:49] kauditd_printk_skb: 47 callbacks suppressed
	[  +9.475817] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.686428] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.598673] kauditd_printk_skb: 95 callbacks suppressed
	[  +1.253211] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.652321] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.000031] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.880165] kauditd_printk_skb: 114 callbacks suppressed
	[Nov21 23:51] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.811687] kauditd_printk_skb: 51 callbacks suppressed
	
	
	==> etcd [5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e] <==
	{"level":"info","ts":"2025-11-21T23:47:56.169034Z","caller":"traceutil/trace.go:172","msg":"trace[503038924] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:933; }","duration":"252.581534ms","start":"2025-11-21T23:47:55.916448Z","end":"2025-11-21T23:47:56.169029Z","steps":["trace[503038924] 'agreement among raft nodes before linearized reading'  (duration: 252.55235ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-21T23:47:59.352083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:59.363648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:59.513514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:59.589561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57252","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T23:48:08.513782Z","caller":"traceutil/trace.go:172","msg":"trace[715400112] transaction","detail":"{read_only:false; response_revision:978; number_of_response:1; }","duration":"116.418162ms","start":"2025-11-21T23:48:08.397351Z","end":"2025-11-21T23:48:08.513770Z","steps":["trace[715400112] 'process raft request'  (duration: 116.119443ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:10.824125Z","caller":"traceutil/trace.go:172","msg":"trace[2036679806] linearizableReadLoop","detail":"{readStateIndex:1014; appliedIndex:1014; }","duration":"203.849321ms","start":"2025-11-21T23:48:10.620261Z","end":"2025-11-21T23:48:10.824110Z","steps":["trace[2036679806] 'read index received'  (duration: 203.843953ms)","trace[2036679806] 'applied index is now lower than readState.Index'  (duration: 4.512µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T23:48:10.824235Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.952821ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T23:48:10.824255Z","caller":"traceutil/trace.go:172","msg":"trace[1038609178] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:986; }","duration":"203.992763ms","start":"2025-11-21T23:48:10.620257Z","end":"2025-11-21T23:48:10.824249Z","steps":["trace[1038609178] 'agreement among raft nodes before linearized reading'  (duration: 203.924903ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:10.827067Z","caller":"traceutil/trace.go:172","msg":"trace[958942931] transaction","detail":"{read_only:false; response_revision:987; number_of_response:1; }","duration":"216.790232ms","start":"2025-11-21T23:48:10.610267Z","end":"2025-11-21T23:48:10.827057Z","steps":["trace[958942931] 'process raft request'  (duration: 213.950708ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:17.235529Z","caller":"traceutil/trace.go:172","msg":"trace[2072959660] linearizableReadLoop","detail":"{readStateIndex:1040; appliedIndex:1040; }","duration":"118.859084ms","start":"2025-11-21T23:48:17.116651Z","end":"2025-11-21T23:48:17.235510Z","steps":["trace[2072959660] 'read index received'  (duration: 118.853824ms)","trace[2072959660] 'applied index is now lower than readState.Index'  (duration: 4.479µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T23:48:17.235633Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.964818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T23:48:17.235650Z","caller":"traceutil/trace.go:172","msg":"trace[1291312129] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1011; }","duration":"118.997232ms","start":"2025-11-21T23:48:17.116647Z","end":"2025-11-21T23:48:17.235645Z","steps":["trace[1291312129] 'agreement among raft nodes before linearized reading'  (duration: 118.929178ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:17.236014Z","caller":"traceutil/trace.go:172","msg":"trace[409496112] transaction","detail":"{read_only:false; response_revision:1012; number_of_response:1; }","duration":"245.19274ms","start":"2025-11-21T23:48:16.990813Z","end":"2025-11-21T23:48:17.236006Z","steps":["trace[409496112] 'process raft request'  (duration: 245.052969ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:20.410362Z","caller":"traceutil/trace.go:172","msg":"trace[828505748] transaction","detail":"{read_only:false; response_revision:1027; number_of_response:1; }","duration":"157.893848ms","start":"2025-11-21T23:48:20.252456Z","end":"2025-11-21T23:48:20.410350Z","steps":["trace[828505748] 'process raft request'  (duration: 157.749487ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:26.972869Z","caller":"traceutil/trace.go:172","msg":"trace[583749754] transaction","detail":"{read_only:false; response_revision:1085; number_of_response:1; }","duration":"180.54926ms","start":"2025-11-21T23:48:26.792295Z","end":"2025-11-21T23:48:26.972845Z","steps":["trace[583749754] 'process raft request'  (duration: 180.444491ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:55.718332Z","caller":"traceutil/trace.go:172","msg":"trace[218102785] linearizableReadLoop","detail":"{readStateIndex:1235; appliedIndex:1235; }","duration":"102.447461ms","start":"2025-11-21T23:48:55.615863Z","end":"2025-11-21T23:48:55.718310Z","steps":["trace[218102785] 'read index received'  (duration: 102.442519ms)","trace[218102785] 'applied index is now lower than readState.Index'  (duration: 4.145µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T23:48:55.718517Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.662851ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T23:48:55.718556Z","caller":"traceutil/trace.go:172","msg":"trace[280205783] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1197; }","duration":"102.741104ms","start":"2025-11-21T23:48:55.615807Z","end":"2025-11-21T23:48:55.718548Z","steps":["trace[280205783] 'agreement among raft nodes before linearized reading'  (duration: 102.634025ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:55.718853Z","caller":"traceutil/trace.go:172","msg":"trace[1563407473] transaction","detail":"{read_only:false; response_revision:1198; number_of_response:1; }","duration":"160.082369ms","start":"2025-11-21T23:48:55.558762Z","end":"2025-11-21T23:48:55.718844Z","steps":["trace[1563407473] 'process raft request'  (duration: 160.006081ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:49:25.230279Z","caller":"traceutil/trace.go:172","msg":"trace[1746671191] transaction","detail":"{read_only:false; response_revision:1422; number_of_response:1; }","duration":"130.337483ms","start":"2025-11-21T23:49:25.099914Z","end":"2025-11-21T23:49:25.230251Z","steps":["trace[1746671191] 'process raft request'  (duration: 128.456166ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:49:31.443123Z","caller":"traceutil/trace.go:172","msg":"trace[1229097043] linearizableReadLoop","detail":"{readStateIndex:1512; appliedIndex:1512; }","duration":"121.2839ms","start":"2025-11-21T23:49:31.321821Z","end":"2025-11-21T23:49:31.443104Z","steps":["trace[1229097043] 'read index received'  (duration: 121.277728ms)","trace[1229097043] 'applied index is now lower than readState.Index'  (duration: 4.966µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T23:49:31.443287Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.446592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T23:49:31.443311Z","caller":"traceutil/trace.go:172","msg":"trace[1460275697] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1465; }","duration":"121.507541ms","start":"2025-11-21T23:49:31.321797Z","end":"2025-11-21T23:49:31.443305Z","steps":["trace[1460275697] 'agreement among raft nodes before linearized reading'  (duration: 121.416565ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:49:31.444122Z","caller":"traceutil/trace.go:172","msg":"trace[1873839518] transaction","detail":"{read_only:false; response_revision:1466; number_of_response:1; }","duration":"152.736081ms","start":"2025-11-21T23:49:31.291375Z","end":"2025-11-21T23:49:31.444111Z","steps":["trace[1873839518] 'process raft request'  (duration: 152.523387ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:51:46 up 4 min,  0 users,  load average: 0.48, 1.30, 0.69
	Linux addons-266876 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7] <==
	W1121 23:47:42.366369       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1121 23:47:42.410842       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1121 23:47:42.649275       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.103.205.35"}
	I1121 23:47:44.226888       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.96.203.27"}
	W1121 23:47:59.343614       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1121 23:47:59.366318       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1121 23:47:59.513667       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1121 23:47:59.564438       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1121 23:48:11.667772       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:48:11.669231       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.6.129:443: connect: connection refused" logger="UnhandledError"
	E1121 23:48:11.670277       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1121 23:48:11.672393       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.6.129:443: connect: connection refused" logger="UnhandledError"
	E1121 23:48:11.677441       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.6.129:443: connect: connection refused" logger="UnhandledError"
	E1121 23:48:11.699611       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.6.129:443: connect: connection refused" logger="UnhandledError"
	I1121 23:48:11.830392       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1121 23:49:07.600969       1 conn.go:339] Error on socket receive: read tcp 192.168.39.50:8443->192.168.39.1:39632: use of closed network connection
	E1121 23:49:07.806030       1 conn.go:339] Error on socket receive: read tcp 192.168.39.50:8443->192.168.39.1:39646: use of closed network connection
	I1121 23:49:16.529402       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1121 23:49:16.732737       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.151.240"}
	I1121 23:49:17.182251       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.21.116"}
	I1121 23:50:12.699613       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1121 23:51:44.509660       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.217.27"}
	
	
	==> kube-controller-manager [3a216f1821ac9afeecc64f730374536676eea6c11ca4c8f9bde7913e3b74b219] <==
	I1121 23:47:29.337483       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 23:47:29.337520       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 23:47:29.337577       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 23:47:29.337666       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 23:47:29.338254       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 23:47:29.338833       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 23:47:29.339107       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 23:47:29.340477       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 23:47:29.340506       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 23:47:29.341040       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 23:47:29.343803       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 23:47:29.357152       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 23:47:29.371508       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1121 23:47:37.577649       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1121 23:47:59.325689       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1121 23:47:59.326487       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1121 23:47:59.326701       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1121 23:47:59.433161       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1121 23:47:59.436132       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 23:47:59.460324       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1121 23:47:59.669783       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 23:49:21.118711       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1121 23:49:39.996446       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I1121 23:49:43.075346       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I1121 23:49:50.779389       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	
	
	==> kube-proxy [9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40] <==
	I1121 23:47:31.549237       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 23:47:31.651147       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 23:47:31.651198       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.50"]
	E1121 23:47:31.651275       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 23:47:31.974605       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1121 23:47:31.975156       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1121 23:47:31.975763       1 server_linux.go:132] "Using iptables Proxier"
	I1121 23:47:32.024377       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 23:47:32.026629       1 server.go:527] "Version info" version="v1.34.1"
	I1121 23:47:32.026711       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 23:47:32.034053       1 config.go:200] "Starting service config controller"
	I1121 23:47:32.034241       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 23:47:32.034262       1 config.go:106] "Starting endpoint slice config controller"
	I1121 23:47:32.034266       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 23:47:32.034276       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 23:47:32.034279       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 23:47:32.039494       1 config.go:309] "Starting node config controller"
	I1121 23:47:32.039506       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 23:47:32.039512       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 23:47:32.134526       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 23:47:32.134549       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 23:47:32.134580       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d] <==
	E1121 23:47:22.475530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 23:47:22.475591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 23:47:22.475644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 23:47:22.475674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 23:47:22.475781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 23:47:22.475833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 23:47:22.475877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 23:47:22.476028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 23:47:22.476096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 23:47:23.318227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 23:47:23.496497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 23:47:23.525267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 23:47:23.575530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 23:47:23.578656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 23:47:23.593013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 23:47:23.593144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 23:47:23.685009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 23:47:23.695610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 23:47:23.719024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 23:47:23.735984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 23:47:23.781311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 23:47:23.797047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 23:47:23.818758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1121 23:47:23.836424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1121 23:47:26.255559       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 23:51:09 addons-266876 kubelet[1502]: E1121 23:51:09.776819    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="484e38f0-cbc8-4850-8360-07b1ea3e62a0"
	Nov 21 23:51:15 addons-266876 kubelet[1502]: E1121 23:51:15.842612    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769075842103086  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:51:15 addons-266876 kubelet[1502]: E1121 23:51:15.842636    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769075842103086  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:51:25 addons-266876 kubelet[1502]: E1121 23:51:25.845995    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769085845309750  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:51:25 addons-266876 kubelet[1502]: E1121 23:51:25.846044    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769085845309750  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:51:33 addons-266876 kubelet[1502]: I1121 23:51:33.400743    1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-pd4sx" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:51:35 addons-266876 kubelet[1502]: E1121 23:51:35.848700    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769095848211281  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:51:35 addons-266876 kubelet[1502]: E1121 23:51:35.849180    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769095848211281  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:51:39 addons-266876 kubelet[1502]: E1121 23:51:39.782023    1502 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Nov 21 23:51:39 addons-266876 kubelet[1502]: E1121 23:51:39.782100    1502 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Nov 21 23:51:39 addons-266876 kubelet[1502]: E1121 23:51:39.782332    1502 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-b9a0f343-b333-4fad-87d0-620f3a86218a_local-path-storage(2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb): ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 21 23:51:39 addons-266876 kubelet[1502]: E1121 23:51:39.782373    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-b9a0f343-b333-4fad-87d0-620f3a86218a" podUID="2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb"
	Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.161405    1502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-script\") pod \"2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb\" (UID: \"2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb\") "
	Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.161460    1502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68x6v\" (UniqueName: \"kubernetes.io/projected/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-kube-api-access-68x6v\") pod \"2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb\" (UID: \"2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb\") "
	Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.161479    1502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-data\") pod \"2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb\" (UID: \"2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb\") "
	Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.161602    1502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-data" (OuterVolumeSpecName: "data") pod "2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb" (UID: "2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.162285    1502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-script" (OuterVolumeSpecName: "script") pod "2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb" (UID: "2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.164734    1502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-kube-api-access-68x6v" (OuterVolumeSpecName: "kube-api-access-68x6v") pod "2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb" (UID: "2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb"). InnerVolumeSpecName "kube-api-access-68x6v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.261911    1502 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-68x6v\" (UniqueName: \"kubernetes.io/projected/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-kube-api-access-68x6v\") on node \"addons-266876\" DevicePath \"\""
	Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.262029    1502 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-data\") on node \"addons-266876\" DevicePath \"\""
	Nov 21 23:51:40 addons-266876 kubelet[1502]: I1121 23:51:40.262040    1502 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb-script\") on node \"addons-266876\" DevicePath \"\""
	Nov 21 23:51:41 addons-266876 kubelet[1502]: I1121 23:51:41.405192    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb" path="/var/lib/kubelet/pods/2b7ac9f2-8e81-4c27-893d-6fb9ca3d4beb/volumes"
	Nov 21 23:51:44 addons-266876 kubelet[1502]: I1121 23:51:44.498696    1502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhdwl\" (UniqueName: \"kubernetes.io/projected/06b9a800-a9fc-4174-8e6f-34e5c7b7563b-kube-api-access-dhdwl\") pod \"hello-world-app-5d498dc89-sqvxb\" (UID: \"06b9a800-a9fc-4174-8e6f-34e5c7b7563b\") " pod="default/hello-world-app-5d498dc89-sqvxb"
	Nov 21 23:51:45 addons-266876 kubelet[1502]: E1121 23:51:45.853093    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769105852419863  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:51:45 addons-266876 kubelet[1502]: E1121 23:51:45.853351    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769105852419863  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	
	
	==> storage-provisioner [62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409] <==
	W1121 23:51:20.738101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:22.741403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:22.747061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:24.752110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:24.761356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:26.765317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:26.770812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:28.774285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:28.778844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:30.783252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:30.792389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:32.796502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:32.805042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:34.809701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:34.815367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:36.819307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:36.825137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:38.828306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:38.836273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:40.840865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:40.849438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:42.853477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:42.861312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:44.869590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:44.884265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-266876 -n addons-266876
helpers_test.go:269: (dbg) Run:  kubectl --context addons-266876 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-sqvxb task-pv-pod test-local-path ingress-nginx-admission-create-xq799 ingress-nginx-admission-patch-ht8dl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-266876 describe pod hello-world-app-5d498dc89-sqvxb task-pv-pod test-local-path ingress-nginx-admission-create-xq799 ingress-nginx-admission-patch-ht8dl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-266876 describe pod hello-world-app-5d498dc89-sqvxb task-pv-pod test-local-path ingress-nginx-admission-create-xq799 ingress-nginx-admission-patch-ht8dl: exit status 1 (97.09513ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-sqvxb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-266876/192.168.39.50
	Start Time:       Fri, 21 Nov 2025 23:51:44 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dhdwl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dhdwl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-sqvxb to addons-266876
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-266876/192.168.39.50
	Start Time:       Fri, 21 Nov 2025 23:49:41 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cj5dd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-cj5dd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  2m5s                default-scheduler  Successfully assigned default/task-pv-pod to addons-266876
	  Warning  Failed     37s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     37s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    37s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     37s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    22s (x2 over 2m5s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-24fvr (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-24fvr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xq799" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ht8dl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-266876 describe pod hello-world-app-5d498dc89-sqvxb task-pv-pod test-local-path ingress-nginx-admission-create-xq799 ingress-nginx-admission-patch-ht8dl: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266876 addons disable ingress-dns --alsologtostderr -v=1: (1.183279152s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266876 addons disable ingress --alsologtostderr -v=1: (7.816361415s)
--- FAIL: TestAddons/parallel/Ingress (159.73s)
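Note on the failure mode: the remote curl exited with status 28, which matches curl's "operation timed out" error, so no response came back through the ingress before the command gave up, even though the nginx backend pod reported Running within 11s. A minimal re-check sketch for this kind of failure, assuming the stock minikube ingress addon layout (a controller deployment named ingress-nginx-controller in the ingress-nginx namespace; those names are not taken from this run):

	# re-run the probe by hand with verbose output and an explicit timeout
	out/minikube-linux-amd64 -p addons-266876 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# confirm the ingress resource, its backend service/pod, and the controller are wired up
	kubectl --context addons-266876 get ingress,svc,pods -n default -o wide
	kubectl --context addons-266876 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50

A timeout (rather than a 404/502 body) points at the request hanging on the way to the backend, not at the controller answering with an error page.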

                                                
                                    
TestAddons/parallel/CSI (372.49s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1121 23:49:38.828704  250664 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1121 23:49:38.845149  250664 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1121 23:49:38.845195  250664 kapi.go:107] duration metric: took 16.52461ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 16.5439ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-266876 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266876 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266876 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266876 get pvc hpvc -o jsonpath={.status.phase} -n default
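The three polls above wait for the hpvc claim created from testdata/csi-hostpath-driver/pvc.yaml to report a phase. A hand-run equivalent of that claim, as a sketch only (the storageClassName and requested size are assumptions, not read from this run):

	cat <<-'EOF' | kubectl --context addons-266876 apply -f -
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 1Gi
	  storageClassName: csi-hostpath-sc
	EOF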
addons_test.go:562: (dbg) Run:  kubectl --context addons-266876 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [484e38f0-cbc8-4850-8360-07b1ea3e62a0] Pending
helpers_test.go:352: "task-pv-pod" [484e38f0-cbc8-4850-8360-07b1ea3e62a0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-266876 -n addons-266876
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-11-21 23:55:41.436722368 +0000 UTC m=+546.301014061
addons_test.go:567: (dbg) Run:  kubectl --context addons-266876 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-266876 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-266876/192.168.39.50
Start Time:       Fri, 21 Nov 2025 23:49:41 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
  IP:  10.244.0.29
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP (http-server)
    Host Port:      0/TCP (http-server)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cj5dd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc
    ReadOnly:   false
  kube-api-access-cj5dd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/task-pv-pod to addons-266876
  Warning  Failed     4m32s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     107s (x3 over 4m32s)  kubelet            Error: ErrImagePull
  Warning  Failed     107s (x2 over 3m32s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    80s (x4 over 4m32s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     80s (x4 over 4m32s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling    65s (x4 over 6m)      kubelet            Pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-266876 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-266876 logs task-pv-pod -n default: exit status 1 (76.265339ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:567: kubectl --context addons-266876 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
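Note on the failure mode: task-pv-pod never reached the CSI hostpath driver itself; its container is stuck on docker.io's unauthenticated pull rate limit, the same error seen for the local-path helper pod earlier in the kubelet log. A hedged workaround sketch for local re-runs; the secret name regcred and the credential placeholders are illustrative, not part of the test suite:

	# authenticate image pulls for pods using the default service account
	kubectl --context addons-266876 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<docker-hub-user> --docker-password=<access-token>
	kubectl --context addons-266876 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'
	# or pre-pull on the (authenticated) host and side-load the image into the node;
	# assumes a docker CLI is available on the host
	docker pull nginx && out/minikube-linux-amd64 -p addons-266876 image load nginx

Pods that already exist keep their original spec, so task-pv-pod would have to be deleted and recreated to pick up the new pull secret.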
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-266876 -n addons-266876
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-266876 logs -n 25: (1.168375672s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-246895                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-246895 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ delete  │ -p download-only-263491                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-263491 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ start   │ --download-only -p binary-mirror-996598 --alsologtostderr --binary-mirror http://127.0.0.1:41123 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-996598 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ delete  │ -p binary-mirror-996598                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-996598 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ addons  │ enable dashboard -p addons-266876                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ addons  │ disable dashboard -p addons-266876                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ start   │ -p addons-266876 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:48 UTC │
	│ addons  │ addons-266876 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │ 21 Nov 25 23:48 UTC │
	│ addons  │ addons-266876 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ enable headlamp -p addons-266876 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-266876                                                                                                                                                                                                                                                                                                                                                                                         │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ ssh     │ addons-266876 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │                     │
	│ addons  │ addons-266876 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ ip      │ addons-266876 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ ip      │ addons-266876 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │ 21 Nov 25 23:51 UTC │
	│ addons  │ addons-266876 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │ 21 Nov 25 23:51 UTC │
	│ addons  │ addons-266876 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │ 21 Nov 25 23:51 UTC │
	│ addons  │ addons-266876 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:54 UTC │ 21 Nov 25 23:54 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:46:48
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:46:48.131095  251263 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:46:48.131340  251263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:48.131350  251263 out.go:374] Setting ErrFile to fd 2...
	I1121 23:46:48.131354  251263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:48.131528  251263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1121 23:46:48.132085  251263 out.go:368] Setting JSON to false
	I1121 23:46:48.132905  251263 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":26936,"bootTime":1763741872,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 23:46:48.132973  251263 start.go:143] virtualization: kvm guest
	I1121 23:46:48.134971  251263 out.go:179] * [addons-266876] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 23:46:48.136184  251263 notify.go:221] Checking for updates...
	I1121 23:46:48.136230  251263 out.go:179]   - MINIKUBE_LOCATION=21934
	I1121 23:46:48.137505  251263 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:46:48.138918  251263 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1121 23:46:48.140232  251263 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	I1121 23:46:48.141364  251263 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 23:46:48.142744  251263 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 23:46:48.144346  251263 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:46:48.178112  251263 out.go:179] * Using the kvm2 driver based on user configuration
	I1121 23:46:48.179144  251263 start.go:309] selected driver: kvm2
	I1121 23:46:48.179156  251263 start.go:930] validating driver "kvm2" against <nil>
	I1121 23:46:48.179168  251263 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 23:46:48.179919  251263 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 23:46:48.180166  251263 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 23:46:48.180191  251263 cni.go:84] Creating CNI manager for ""
	I1121 23:46:48.180267  251263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1121 23:46:48.180276  251263 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1121 23:46:48.180323  251263 start.go:353] cluster config:
	{Name:addons-266876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1121 23:46:48.180438  251263 iso.go:125] acquiring lock: {Name:mkc83d3435f1eaa5a92358fc78f85b7d74048deb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 23:46:48.181860  251263 out.go:179] * Starting "addons-266876" primary control-plane node in "addons-266876" cluster
	I1121 23:46:48.182929  251263 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:46:48.182959  251263 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 23:46:48.182976  251263 cache.go:65] Caching tarball of preloaded images
	I1121 23:46:48.183059  251263 preload.go:238] Found /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 23:46:48.183069  251263 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 23:46:48.183354  251263 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/config.json ...
	I1121 23:46:48.183376  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/config.json: {Name:mk0295453cd01463fa22b5d6c7388981c204c24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:48.183507  251263 start.go:360] acquireMachinesLock for addons-266876: {Name:mk0193f6f5636a08cc7939c91649c5870e7698fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1121 23:46:48.183552  251263 start.go:364] duration metric: took 33.297µs to acquireMachinesLock for "addons-266876"
	I1121 23:46:48.183570  251263 start.go:93] Provisioning new machine with config: &{Name:addons-266876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 23:46:48.183614  251263 start.go:125] createHost starting for "" (driver="kvm2")
	I1121 23:46:48.185254  251263 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1121 23:46:48.185412  251263 start.go:159] libmachine.API.Create for "addons-266876" (driver="kvm2")
	I1121 23:46:48.185441  251263 client.go:173] LocalClient.Create starting
	I1121 23:46:48.185543  251263 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem
	I1121 23:46:48.249364  251263 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem
	I1121 23:46:48.566610  251263 main.go:143] libmachine: creating domain...
	I1121 23:46:48.566636  251263 main.go:143] libmachine: creating network...
	I1121 23:46:48.568191  251263 main.go:143] libmachine: found existing default network
	I1121 23:46:48.568404  251263 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1121 23:46:48.568892  251263 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e90440}
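For readers skimming the network setup above: the driver picks a free private /24 and derives the gateway, DHCP client range, and broadcast address from it (the struct in the log shows 192.168.39.1, .2, .254 and .255). A minimal stdlib sketch of that arithmetic, illustrative only and not minikube's own network code, would be:

// Illustrative only: derive the gateway, client range, and broadcast
// address for an IPv4 /24 like the one chosen above.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.39.0/24")
	if err != nil {
		panic(err)
	}
	base := ipnet.IP.To4()
	gateway := net.IPv4(base[0], base[1], base[2], 1)     // 192.168.39.1
	clientMin := net.IPv4(base[0], base[1], base[2], 2)   // 192.168.39.2
	clientMax := net.IPv4(base[0], base[1], base[2], 254) // 192.168.39.254
	broadcast := net.IPv4(base[0], base[1], base[2], 255) // 192.168.39.255
	fmt.Println(gateway, clientMin, clientMax, broadcast)
}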
	I1121 23:46:48.569009  251263 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-266876</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1121 23:46:48.575044  251263 main.go:143] libmachine: creating private network mk-addons-266876 192.168.39.0/24...
	I1121 23:46:48.645727  251263 main.go:143] libmachine: private network mk-addons-266876 192.168.39.0/24 created
	I1121 23:46:48.646042  251263 main.go:143] libmachine: <network>
	  <name>mk-addons-266876</name>
	  <uuid>c503bc44-d3ea-47cf-b120-da4593d18380</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:80:0f:c2'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1121 23:46:48.646078  251263 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876 ...
	I1121 23:46:48.646103  251263 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21934-244751/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1121 23:46:48.646114  251263 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21934-244751/.minikube
	I1121 23:46:48.646192  251263 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21934-244751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21934-244751/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1121 23:46:48.924945  251263 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa...
	I1121 23:46:48.947251  251263 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/addons-266876.rawdisk...
	I1121 23:46:48.947299  251263 main.go:143] libmachine: Writing magic tar header
	I1121 23:46:48.947321  251263 main.go:143] libmachine: Writing SSH key tar header
	I1121 23:46:48.947404  251263 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876 ...
	I1121 23:46:48.947463  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876
	I1121 23:46:48.947488  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876 (perms=drwx------)
	I1121 23:46:48.947500  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751/.minikube/machines
	I1121 23:46:48.947510  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751/.minikube/machines (perms=drwxr-xr-x)
	I1121 23:46:48.947521  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751/.minikube
	I1121 23:46:48.947528  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751/.minikube (perms=drwxr-xr-x)
	I1121 23:46:48.947540  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751
	I1121 23:46:48.947549  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751 (perms=drwxrwxr-x)
	I1121 23:46:48.947562  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1121 23:46:48.947572  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1121 23:46:48.947579  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1121 23:46:48.947589  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1121 23:46:48.947600  251263 main.go:143] libmachine: checking permissions on dir: /home
	I1121 23:46:48.947606  251263 main.go:143] libmachine: skipping /home - not owner
	I1121 23:46:48.947613  251263 main.go:143] libmachine: defining domain...
	I1121 23:46:48.949155  251263 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-266876</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/addons-266876.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-266876'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
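The domain definition above is rendered from the cluster config (name, memory, CPUs, disks, networks). A trimmed, hypothetical sketch of producing such XML from a Go text/template follows; the template and struct below are stand-ins for illustration, not the kvm2 driver's actual ones:

// Hypothetical, trimmed example of rendering a libvirt domain definition
// from a Go text/template.
package main

import (
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
</domain>
`

type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	cfg := domainConfig{Name: "addons-266876", MemoryMiB: 4096, CPUs: 2}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}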
	
	I1121 23:46:48.954504  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:cb:01:39 in network default
	I1121 23:46:48.955203  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:48.955226  251263 main.go:143] libmachine: starting domain...
	I1121 23:46:48.955230  251263 main.go:143] libmachine: ensuring networks are active...
	I1121 23:46:48.956075  251263 main.go:143] libmachine: Ensuring network default is active
	I1121 23:46:48.956468  251263 main.go:143] libmachine: Ensuring network mk-addons-266876 is active
	I1121 23:46:48.957054  251263 main.go:143] libmachine: getting domain XML...
	I1121 23:46:48.958124  251263 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-266876</name>
	  <uuid>c4a95d5c-2715-4bec-8bc2-a50909bf4217</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/addons-266876.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:ab:5a:31'/>
	      <source network='mk-addons-266876'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:cb:01:39'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1121 23:46:50.230732  251263 main.go:143] libmachine: waiting for domain to start...
	I1121 23:46:50.232398  251263 main.go:143] libmachine: domain is now running
	I1121 23:46:50.232423  251263 main.go:143] libmachine: waiting for IP...
	I1121 23:46:50.233366  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:50.234245  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:50.234266  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:50.234594  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:50.234654  251263 retry.go:31] will retry after 291.794239ms: waiting for domain to come up
	I1121 23:46:50.528283  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:50.528971  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:50.528987  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:50.529342  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:50.529380  251263 retry.go:31] will retry after 351.305248ms: waiting for domain to come up
	I1121 23:46:50.882166  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:50.883099  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:50.883122  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:50.883485  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:50.883531  251263 retry.go:31] will retry after 364.129033ms: waiting for domain to come up
	I1121 23:46:51.249389  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:51.250192  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:51.250210  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:51.250511  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:51.250562  251263 retry.go:31] will retry after 385.747401ms: waiting for domain to come up
	I1121 23:46:51.638320  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:51.639301  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:51.639319  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:51.639704  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:51.639759  251263 retry.go:31] will retry after 745.315642ms: waiting for domain to come up
	I1121 23:46:52.386579  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:52.387430  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:52.387444  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:52.387845  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:52.387891  251263 retry.go:31] will retry after 692.465755ms: waiting for domain to come up
	I1121 23:46:53.081995  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:53.082882  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:53.082899  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:53.083254  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:53.083289  251263 retry.go:31] will retry after 879.261574ms: waiting for domain to come up
	I1121 23:46:53.964041  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:53.964752  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:53.964779  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:53.965086  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:53.965141  251263 retry.go:31] will retry after 1.461085566s: waiting for domain to come up
	I1121 23:46:55.428870  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:55.429589  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:55.429605  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:55.429939  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:55.429981  251263 retry.go:31] will retry after 1.78072773s: waiting for domain to come up
	I1121 23:46:57.213143  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:57.213941  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:57.213961  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:57.214320  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:57.214355  251263 retry.go:31] will retry after 1.504173315s: waiting for domain to come up
	I1121 23:46:58.719849  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:58.720746  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:58.720770  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:58.721137  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:58.721173  251263 retry.go:31] will retry after 2.875642747s: waiting for domain to come up
	I1121 23:47:01.600296  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:01.600945  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:47:01.600961  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:47:01.601274  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:47:01.601321  251263 retry.go:31] will retry after 3.623260763s: waiting for domain to come up
	I1121 23:47:05.227711  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.228458  251263 main.go:143] libmachine: domain addons-266876 has current primary IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.228475  251263 main.go:143] libmachine: found domain IP: 192.168.39.50
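The retry.go lines above show a poll-with-growing-backoff loop while the new VM acquires a DHCP lease. A self-contained sketch of that pattern is below; lookupIP is a hypothetical stand-in for the real lease/ARP query, and the durations only roughly match the log:

// Sketch of the poll-with-growing-backoff pattern visible in the retry.go
// lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP yet")

// lookupIP is a placeholder; the real code inspects DHCP leases and ARP tables.
func lookupIP() (string, error) { return "", errNoIP }

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the wait and add a little jitter, like the
		// 291ms, 351ms, 364ms, ... sequence in the log.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
	return "", errors.New("timed out waiting for IP")
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}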
	I1121 23:47:05.228486  251263 main.go:143] libmachine: reserving static IP address...
	I1121 23:47:05.229043  251263 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-266876", mac: "52:54:00:ab:5a:31", ip: "192.168.39.50"} in network mk-addons-266876
	I1121 23:47:05.530130  251263 main.go:143] libmachine: reserved static IP address 192.168.39.50 for domain addons-266876
	I1121 23:47:05.530160  251263 main.go:143] libmachine: waiting for SSH...
	I1121 23:47:05.530169  251263 main.go:143] libmachine: Getting to WaitForSSH function...
	I1121 23:47:05.533988  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.534529  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:05.534565  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.534795  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:05.535088  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:05.535104  251263 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1121 23:47:05.657550  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: 
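The WaitForSSH step above boils down to confirming the guest accepts SSH connections and then running `exit 0` over the session. A minimal reachability probe, assuming only the address reported in the log, could look like:

// Minimal readiness probe: confirm TCP port 22 on the new VM accepts a
// connection; the real code then runs `exit 0` over an SSH session.
package main

import (
	"fmt"
	"net"
	"time"
)

func sshReachable(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println(sshReachable("192.168.39.50:22"))
}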
	I1121 23:47:05.657963  251263 main.go:143] libmachine: domain creation complete
	I1121 23:47:05.659772  251263 machine.go:94] provisionDockerMachine start ...
	I1121 23:47:05.662740  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.663237  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:05.663263  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.663525  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:05.663805  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:05.663820  251263 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 23:47:05.773778  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1121 23:47:05.773809  251263 buildroot.go:166] provisioning hostname "addons-266876"
	I1121 23:47:05.777397  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.777855  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:05.777881  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.778090  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:05.778347  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:05.778362  251263 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-266876 && echo "addons-266876" | sudo tee /etc/hostname
	I1121 23:47:05.904549  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-266876
	
	I1121 23:47:05.907947  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.908399  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:05.908428  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.908637  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:05.908909  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:05.908934  251263 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-266876' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-266876/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-266876' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 23:47:06.027505  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 23:47:06.027542  251263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21934-244751/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-244751/.minikube}
	I1121 23:47:06.027606  251263 buildroot.go:174] setting up certificates
	I1121 23:47:06.027620  251263 provision.go:84] configureAuth start
	I1121 23:47:06.030823  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.031234  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.031255  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.033405  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.033742  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.033761  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.033873  251263 provision.go:143] copyHostCerts
	I1121 23:47:06.033958  251263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/key.pem (1679 bytes)
	I1121 23:47:06.034087  251263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/ca.pem (1078 bytes)
	I1121 23:47:06.034147  251263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/cert.pem (1123 bytes)
	I1121 23:47:06.034206  251263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem org=jenkins.addons-266876 san=[127.0.0.1 192.168.39.50 addons-266876 localhost minikube]
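configureAuth generates a server certificate whose SANs match the provision.go line above. A hedged sketch with crypto/x509 follows; it is self-signed for brevity, whereas the real flow signs with the minikube CA, and the 26280h lifetime simply mirrors CertExpiration from the cluster config:

// Hedged sketch of generating a server certificate with the SANs listed above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-266876"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-266876", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.50")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}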
	I1121 23:47:06.088178  251263 provision.go:177] copyRemoteCerts
	I1121 23:47:06.088255  251263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 23:47:06.090836  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.091229  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.091259  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.091419  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:06.177697  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1121 23:47:06.208945  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 23:47:06.240002  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 23:47:06.271424  251263 provision.go:87] duration metric: took 243.786645ms to configureAuth
	I1121 23:47:06.271463  251263 buildroot.go:189] setting minikube options for container-runtime
	I1121 23:47:06.271718  251263 config.go:182] Loaded profile config "addons-266876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:47:06.275170  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.275691  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.275730  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.276021  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:06.276275  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:06.276292  251263 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 23:47:06.522993  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 23:47:06.523024  251263 machine.go:97] duration metric: took 863.230308ms to provisionDockerMachine
	I1121 23:47:06.523034  251263 client.go:176] duration metric: took 18.337586387s to LocalClient.Create
	I1121 23:47:06.523056  251263 start.go:167] duration metric: took 18.337642424s to libmachine.API.Create "addons-266876"
	I1121 23:47:06.523067  251263 start.go:293] postStartSetup for "addons-266876" (driver="kvm2")
	I1121 23:47:06.523080  251263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 23:47:06.523174  251263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 23:47:06.526182  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.526662  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.526701  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.526857  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:06.616570  251263 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 23:47:06.622182  251263 info.go:137] Remote host: Buildroot 2025.02
	I1121 23:47:06.622217  251263 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-244751/.minikube/addons for local assets ...
	I1121 23:47:06.622288  251263 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-244751/.minikube/files for local assets ...
	I1121 23:47:06.622311  251263 start.go:296] duration metric: took 99.238343ms for postStartSetup
	I1121 23:47:06.625431  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.626043  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.626079  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.626664  251263 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/config.json ...
	I1121 23:47:06.626937  251263 start.go:128] duration metric: took 18.44331085s to createHost
	I1121 23:47:06.629842  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.630374  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.630404  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.630671  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:06.630883  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:06.630893  251263 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1121 23:47:06.742838  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763768826.701122136
	
	I1121 23:47:06.742869  251263 fix.go:216] guest clock: 1763768826.701122136
	I1121 23:47:06.742878  251263 fix.go:229] Guest: 2025-11-21 23:47:06.701122136 +0000 UTC Remote: 2025-11-21 23:47:06.626948375 +0000 UTC m=+18.545515405 (delta=74.173761ms)
	I1121 23:47:06.742897  251263 fix.go:200] guest clock delta is within tolerance: 74.173761ms
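The fix.go lines above compare the guest clock (`date +%s.%N` over SSH) against the host clock and accept a small skew. An illustrative version is below; runSSH is a hypothetical stand-in for the SSH runner, and the 2s tolerance is an assumption, not the value minikube uses:

// Illustrative guest-clock check: parse `date +%s.%N` output from the VM
// and compare it with the host clock.
package main

import (
	"fmt"
	"strconv"
	"time"
)

// runSSH is a placeholder; the real code executes the command on the guest.
func runSSH(cmd string) string { return "1763768826.701122136" }

func main() {
	out := runSSH("date +%s.%N")
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta %v (within tolerance %v: %v)\n", delta, tolerance, delta <= tolerance)
}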
	I1121 23:47:06.742902  251263 start.go:83] releasing machines lock for "addons-266876", held for 18.559341059s
	I1121 23:47:06.745883  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.746295  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.746321  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.746833  251263 ssh_runner.go:195] Run: cat /version.json
	I1121 23:47:06.746947  251263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 23:47:06.750243  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.750247  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.750776  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.750809  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.750823  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.750856  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.751031  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:06.751199  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:06.830906  251263 ssh_runner.go:195] Run: systemctl --version
	I1121 23:47:06.862977  251263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 23:47:07.024839  251263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 23:47:07.032647  251263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 23:47:07.032771  251263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 23:47:07.054527  251263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 23:47:07.054564  251263 start.go:496] detecting cgroup driver to use...
	I1121 23:47:07.054645  251263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 23:47:07.075688  251263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 23:47:07.094661  251263 docker.go:218] disabling cri-docker service (if available) ...
	I1121 23:47:07.094747  251263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 23:47:07.112602  251263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 23:47:07.129177  251263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 23:47:07.274890  251263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 23:47:07.492757  251263 docker.go:234] disabling docker service ...
	I1121 23:47:07.492831  251263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 23:47:07.510021  251263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 23:47:07.525620  251263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 23:47:07.675935  251263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 23:47:07.820400  251263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 23:47:07.837622  251263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 23:47:07.861864  251263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 23:47:07.861942  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.875198  251263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 23:47:07.875282  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.889198  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.902595  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.915879  251263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 23:47:07.929954  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.943664  251263 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.965719  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.978868  251263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 23:47:07.991074  251263 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1121 23:47:07.991144  251263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1121 23:47:08.015804  251263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 23:47:08.029594  251263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:47:08.172544  251263 ssh_runner.go:195] Run: sudo systemctl restart crio
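Taken together, the crictl endpoint plus the sed edits above amount to a small CRI-O drop-in; the block below is only a sketch of that end state and a quick verification. The TOML section placement is an assumption based on a stock CRI-O config, not something shown in the log.

    # Assumed end state of /etc/crio/crio.conf.d/02-crio.conf after the edits above
    # (section headers are an assumption; the log only shows the key rewrites):
    #   [crio.image]
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   [crio.runtime]
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0" ]
    # Quick check once crio has restarted:
    sudo crictl version          # uses the endpoint written to /etc/crictl.yaml
    sudo crictl info | head -n 5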
	I1121 23:47:08.286465  251263 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 23:47:08.286546  251263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 23:47:08.292422  251263 start.go:564] Will wait 60s for crictl version
	I1121 23:47:08.292523  251263 ssh_runner.go:195] Run: which crictl
	I1121 23:47:08.297252  251263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1121 23:47:08.333825  251263 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1121 23:47:08.333924  251263 ssh_runner.go:195] Run: crio --version
	I1121 23:47:08.364777  251263 ssh_runner.go:195] Run: crio --version
	I1121 23:47:08.397593  251263 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1121 23:47:08.401817  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:08.402315  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:08.402343  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:08.402614  251263 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1121 23:47:08.408058  251263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
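The single-line hosts update above is dense; unpacked, it is an idempotent rewrite of /etc/hosts, roughly:

    # Strip any stale host.minikube.internal entry, append the current mapping,
    # then copy the rebuilt file back into place.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.39.1\thost.minikube.internal\n'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts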
	I1121 23:47:08.427560  251263 kubeadm.go:884] updating cluster {Name:addons-266876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 23:47:08.427708  251263 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:47:08.427752  251263 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 23:47:08.466046  251263 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1121 23:47:08.466131  251263 ssh_runner.go:195] Run: which lz4
	I1121 23:47:08.471268  251263 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1121 23:47:08.476699  251263 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1121 23:47:08.476733  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1121 23:47:10.046904  251263 crio.go:462] duration metric: took 1.575665951s to copy over tarball
	I1121 23:47:10.046997  251263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1121 23:47:11.663077  251263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.616046572s)
	I1121 23:47:11.663118  251263 crio.go:469] duration metric: took 1.616181048s to extract the tarball
	I1121 23:47:11.663129  251263 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1121 23:47:11.705893  251263 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 23:47:11.746467  251263 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 23:47:11.746493  251263 cache_images.go:86] Images are preloaded, skipping loading
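For reference, the copy/extract/verify sequence above can be reproduced by hand; a minimal sketch using the same paths as in the log:

    # Unpack the preloaded image tarball into /var, keeping xattrs so file
    # capabilities survive, then do a rough check that cri-o now sees the images.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images | grep registry.k8s.io/kube-apiserver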
	I1121 23:47:11.746502  251263 kubeadm.go:935] updating node { 192.168.39.50 8443 v1.34.1 crio true true} ...
	I1121 23:47:11.746609  251263 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-266876 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 23:47:11.746698  251263 ssh_runner.go:195] Run: crio config
	I1121 23:47:11.795708  251263 cni.go:84] Creating CNI manager for ""
	I1121 23:47:11.795739  251263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1121 23:47:11.795759  251263 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 23:47:11.795781  251263 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-266876 NodeName:addons-266876 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 23:47:11.795901  251263 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-266876"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.50"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 23:47:11.795977  251263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 23:47:11.808516  251263 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 23:47:11.808581  251263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 23:47:11.820622  251263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1121 23:47:11.842831  251263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 23:47:11.864556  251263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1121 23:47:11.887018  251263 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I1121 23:47:11.891743  251263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 23:47:11.907140  251263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:47:12.050500  251263 ssh_runner.go:195] Run: sudo systemctl start kubelet
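If the kubelet failed to start at this point, the generated unit, drop-in, and staged kubeadm config written just above could be inspected directly, for example:

    systemctl cat kubelet --no-pager        # kubelet.service plus the 10-kubeadm.conf drop-in
    sudo head -n 20 /var/tmp/minikube/kubeadm.yaml.new
    journalctl -u kubelet --no-pager -n 50  # recent kubelet logs if the start failed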
	I1121 23:47:12.084445  251263 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876 for IP: 192.168.39.50
	I1121 23:47:12.084477  251263 certs.go:195] generating shared ca certs ...
	I1121 23:47:12.084503  251263 certs.go:227] acquiring lock for ca certs: {Name:mk43fa762c6315605485300e07cf83f8f357f8dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.084733  251263 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key
	I1121 23:47:12.219080  251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt ...
	I1121 23:47:12.219114  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt: {Name:mk4ab860b5f00eeacc7d5a064e6b8682b8350cc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.219328  251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key ...
	I1121 23:47:12.219350  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key: {Name:mkd33a6a072a0fb7cb39783adfcb9f792da25f35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.219466  251263 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key
	I1121 23:47:12.275894  251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.crt ...
	I1121 23:47:12.275930  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.crt: {Name:mk4874a4ae2a76e1a44a3b81a6402bcd1f4b9663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.276126  251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key ...
	I1121 23:47:12.276145  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key: {Name:mk1d8c1db5a8f9f2ab09a6bc1211706c413d6bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.276291  251263 certs.go:257] generating profile certs ...
	I1121 23:47:12.276376  251263 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.key
	I1121 23:47:12.276402  251263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt with IP's: []
	I1121 23:47:12.405508  251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt ...
	I1121 23:47:12.405541  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: {Name:mkcc0d2bdbfeba71ea1f4e63e41e1151d9d382ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.405791  251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.key ...
	I1121 23:47:12.405812  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.key: {Name:mk1d82213fc29dcec5419cdd18c321f7613a56e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.405953  251263 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key.8d7367ca
	I1121 23:47:12.405982  251263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt.8d7367ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.50]
	I1121 23:47:12.443135  251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt.8d7367ca ...
	I1121 23:47:12.443162  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt.8d7367ca: {Name:mk318161f2384c8556874dd6e6e5fc8eee5c9cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.443363  251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key.8d7367ca ...
	I1121 23:47:12.443385  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key.8d7367ca: {Name:mke2fa439b03069f58550af68f202fe26e9c97ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.443489  251263 certs.go:382] copying /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt.8d7367ca -> /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt
	I1121 23:47:12.443595  251263 certs.go:386] copying /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key.8d7367ca -> /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key
	I1121 23:47:12.443670  251263 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.key
	I1121 23:47:12.443705  251263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.crt with IP's: []
	I1121 23:47:12.603488  251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.crt ...
	I1121 23:47:12.603520  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.crt: {Name:mk795b280bcd9c59cf78ec03ece9d4b0753eaaa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.603755  251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.key ...
	I1121 23:47:12.603779  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.key: {Name:mkfe4eecc4523b56c0d41272318c6e77ecb4dd52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.604032  251263 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 23:47:12.604112  251263 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem (1078 bytes)
	I1121 23:47:12.604152  251263 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem (1123 bytes)
	I1121 23:47:12.604194  251263 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem (1679 bytes)
	I1121 23:47:12.604861  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 23:47:12.637531  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 23:47:12.669272  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 23:47:12.700033  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 23:47:12.730398  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 23:47:12.766760  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 23:47:12.814595  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 23:47:12.848615  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 23:47:12.879920  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 23:47:12.912022  251263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 23:47:12.933857  251263 ssh_runner.go:195] Run: openssl version
	I1121 23:47:12.940506  251263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 23:47:12.953948  251263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:47:12.959503  251263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:47:12.959560  251263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:47:12.967627  251263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
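The hash-and-symlink pair above is the standard OpenSSL trust-store layout; done by hand it is roughly:

    # OpenSSL resolves CAs by subject hash, so the CA cert gets a <hash>.0 symlink
    # (b5213941 in this run) next to the PEM in /etc/ssl/certs.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"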
	I1121 23:47:12.981398  251263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 23:47:12.986879  251263 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 23:47:12.986957  251263 kubeadm.go:401] StartCluster: {Name:addons-266876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:47:12.987064  251263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:47:12.987158  251263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:47:13.025633  251263 cri.go:89] found id: ""
	I1121 23:47:13.025741  251263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 23:47:13.038755  251263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 23:47:13.052370  251263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 23:47:13.065036  251263 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 23:47:13.065062  251263 kubeadm.go:158] found existing configuration files:
	
	I1121 23:47:13.065139  251263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 23:47:13.077032  251263 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 23:47:13.077097  251263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 23:47:13.090073  251263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 23:47:13.101398  251263 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 23:47:13.101465  251263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 23:47:13.114396  251263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 23:47:13.126235  251263 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 23:47:13.126304  251263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 23:47:13.139694  251263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 23:47:13.151819  251263 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 23:47:13.151882  251263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
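The four grep/remove pairs above are a stale-kubeconfig sweep; compressed into a loop, the same logic reads:

    # Drop any leftover kubeconfig that does not point at the expected
    # control-plane endpoint; kubeadm init regenerates them below.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done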
	I1121 23:47:13.164512  251263 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1121 23:47:13.226756  251263 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 23:47:13.226832  251263 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 23:47:13.345339  251263 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 23:47:13.345491  251263 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 23:47:13.345647  251263 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 23:47:13.359341  251263 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 23:47:13.436841  251263 out.go:252]   - Generating certificates and keys ...
	I1121 23:47:13.437031  251263 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 23:47:13.437171  251263 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 23:47:13.558105  251263 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 23:47:13.651102  251263 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 23:47:13.902476  251263 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 23:47:14.134826  251263 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 23:47:14.345459  251263 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 23:47:14.345645  251263 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-266876 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I1121 23:47:14.583497  251263 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 23:47:14.583717  251263 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-266876 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I1121 23:47:14.931062  251263 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 23:47:15.434495  251263 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 23:47:15.838983  251263 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 23:47:15.839096  251263 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 23:47:15.963541  251263 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 23:47:16.269311  251263 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 23:47:16.929016  251263 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 23:47:17.056928  251263 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 23:47:17.384976  251263 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 23:47:17.385309  251263 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 23:47:17.387510  251263 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 23:47:17.389626  251263 out.go:252]   - Booting up control plane ...
	I1121 23:47:17.389730  251263 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 23:47:17.389802  251263 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 23:47:17.389859  251263 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 23:47:17.408245  251263 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 23:47:17.408393  251263 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 23:47:17.416098  251263 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 23:47:17.416463  251263 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 23:47:17.416528  251263 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 23:47:17.572061  251263 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 23:47:17.572273  251263 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 23:47:18.575810  251263 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.003449114s
	I1121 23:47:18.581453  251263 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 23:47:18.581592  251263 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.50:8443/livez
	I1121 23:47:18.581745  251263 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 23:47:18.581872  251263 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 23:47:21.444953  251263 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.865438426s
	I1121 23:47:22.473854  251263 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.895647364s
	I1121 23:47:24.581213  251263 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003558147s
	I1121 23:47:24.600634  251263 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 23:47:24.621062  251263 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 23:47:24.638002  251263 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 23:47:24.638263  251263 kubeadm.go:319] [mark-control-plane] Marking the node addons-266876 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 23:47:24.652039  251263 kubeadm.go:319] [bootstrap-token] Using token: grn95n.s74ahx9w73uu3ca1
	I1121 23:47:24.653732  251263 out.go:252]   - Configuring RBAC rules ...
	I1121 23:47:24.653880  251263 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 23:47:24.659155  251263 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 23:47:24.672314  251263 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 23:47:24.680496  251263 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 23:47:24.684483  251263 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 23:47:24.688905  251263 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 23:47:24.990519  251263 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 23:47:25.446692  251263 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 23:47:25.987142  251263 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 23:47:25.988495  251263 kubeadm.go:319] 
	I1121 23:47:25.988586  251263 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 23:47:25.988628  251263 kubeadm.go:319] 
	I1121 23:47:25.988755  251263 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 23:47:25.988774  251263 kubeadm.go:319] 
	I1121 23:47:25.988799  251263 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 23:47:25.988879  251263 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 23:47:25.988970  251263 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 23:47:25.988990  251263 kubeadm.go:319] 
	I1121 23:47:25.989051  251263 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 23:47:25.989061  251263 kubeadm.go:319] 
	I1121 23:47:25.989146  251263 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 23:47:25.989158  251263 kubeadm.go:319] 
	I1121 23:47:25.989248  251263 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 23:47:25.989366  251263 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 23:47:25.989475  251263 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 23:47:25.989488  251263 kubeadm.go:319] 
	I1121 23:47:25.989602  251263 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 23:47:25.989728  251263 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 23:47:25.989738  251263 kubeadm.go:319] 
	I1121 23:47:25.989856  251263 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token grn95n.s74ahx9w73uu3ca1 \
	I1121 23:47:25.990007  251263 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7035eabdb6dc9c299f99d6120e0649f8a13de0412ab5d63e88dba6debc1b302c \
	I1121 23:47:25.990049  251263 kubeadm.go:319] 	--control-plane 
	I1121 23:47:25.990057  251263 kubeadm.go:319] 
	I1121 23:47:25.990176  251263 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 23:47:25.990186  251263 kubeadm.go:319] 
	I1121 23:47:25.990300  251263 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token grn95n.s74ahx9w73uu3ca1 \
	I1121 23:47:25.990438  251263 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7035eabdb6dc9c299f99d6120e0649f8a13de0412ab5d63e88dba6debc1b302c 
	I1121 23:47:25.992560  251263 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
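The preflight warning above only concerns enabling the kubelet unit at boot (minikube starts it explicitly earlier in this log); the remediation kubeadm suggests would simply be:

    sudo systemctl enable kubelet.service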
	I1121 23:47:25.992602  251263 cni.go:84] Creating CNI manager for ""
	I1121 23:47:25.992623  251263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1121 23:47:25.994543  251263 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1121 23:47:25.996106  251263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1121 23:47:26.010555  251263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
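To confirm what the bridge CNI step left on disk, the net.d directory can be listed; the podman config disabled earlier in the log should still be present under its .mk_disabled name:

    sudo ls /etc/cni/net.d/                  # expect 1-k8s.conflist and 87-podman-bridge.conflist.mk_disabled
    sudo cat /etc/cni/net.d/1-k8s.conflist   # the 496-byte bridge config written above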
	I1121 23:47:26.033834  251263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 23:47:26.033972  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:26.033980  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-266876 minikube.k8s.io/updated_at=2025_11_21T23_47_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=addons-266876 minikube.k8s.io/primary=true
	I1121 23:47:26.084057  251263 ops.go:34] apiserver oom_adj: -16
	I1121 23:47:26.203325  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:26.704291  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:27.204057  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:27.704402  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:28.204383  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:28.704103  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:29.204400  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:29.704060  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:30.204340  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:30.314187  251263 kubeadm.go:1114] duration metric: took 4.280316282s to wait for elevateKubeSystemPrivileges
	I1121 23:47:30.314239  251263 kubeadm.go:403] duration metric: took 17.327291456s to StartCluster
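The repeated `kubectl get sa default` calls above are a readiness poll; written out as a loop, the same wait is roughly:

    # Poll until the "default" ServiceAccount exists, i.e. the API server answers
    # and kube-system privileges have been elevated.
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done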
	I1121 23:47:30.314270  251263 settings.go:142] acquiring lock: {Name:mkd124ec98418d6d2386a8f1a0e2e5ff6f0f99d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:30.314449  251263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1121 23:47:30.314952  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/kubeconfig: {Name:mkbde37dbfe874aace118914fefd91b607e3afff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:30.315195  251263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 23:47:30.315224  251263 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 23:47:30.315300  251263 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
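Each entry in this toEnable map can also be toggled after the fact with the addons subcommand; for example, against the same profile and test binary:

    out/minikube-linux-amd64 -p addons-266876 addons list
    out/minikube-linux-amd64 -p addons-266876 addons enable metrics-server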
	I1121 23:47:30.315425  251263 addons.go:70] Setting yakd=true in profile "addons-266876"
	I1121 23:47:30.315450  251263 addons.go:239] Setting addon yakd=true in "addons-266876"
	I1121 23:47:30.315462  251263 addons.go:70] Setting inspektor-gadget=true in profile "addons-266876"
	I1121 23:47:30.315485  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.315491  251263 addons.go:239] Setting addon inspektor-gadget=true in "addons-266876"
	I1121 23:47:30.315501  251263 addons.go:70] Setting default-storageclass=true in profile "addons-266876"
	I1121 23:47:30.315529  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.315528  251263 config.go:182] Loaded profile config "addons-266876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:47:30.315544  251263 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-266876"
	I1121 23:47:30.315569  251263 addons.go:70] Setting cloud-spanner=true in profile "addons-266876"
	I1121 23:47:30.315601  251263 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-266876"
	I1121 23:47:30.315604  251263 addons.go:70] Setting registry-creds=true in profile "addons-266876"
	I1121 23:47:30.315608  251263 addons.go:239] Setting addon cloud-spanner=true in "addons-266876"
	I1121 23:47:30.315620  251263 addons.go:239] Setting addon registry-creds=true in "addons-266876"
	I1121 23:47:30.315642  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.315644  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.315644  251263 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-266876"
	I1121 23:47:30.315691  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.315903  251263 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-266876"
	I1121 23:47:30.315921  251263 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-266876"
	I1121 23:47:30.315947  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.316235  251263 addons.go:70] Setting ingress=true in profile "addons-266876"
	I1121 23:47:30.316274  251263 addons.go:239] Setting addon ingress=true in "addons-266876"
	I1121 23:47:30.316310  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.316663  251263 addons.go:70] Setting registry=true in profile "addons-266876"
	I1121 23:47:30.316697  251263 addons.go:239] Setting addon registry=true in "addons-266876"
	I1121 23:47:30.316723  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.317068  251263 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-266876"
	I1121 23:47:30.317089  251263 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-266876"
	I1121 23:47:30.317115  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.317160  251263 addons.go:70] Setting gcp-auth=true in profile "addons-266876"
	I1121 23:47:30.315588  251263 addons.go:70] Setting ingress-dns=true in profile "addons-266876"
	I1121 23:47:30.317206  251263 mustload.go:66] Loading cluster: addons-266876
	I1121 23:47:30.317231  251263 addons.go:239] Setting addon ingress-dns=true in "addons-266876"
	I1121 23:47:30.317253  251263 addons.go:70] Setting metrics-server=true in profile "addons-266876"
	I1121 23:47:30.317268  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.317272  251263 addons.go:239] Setting addon metrics-server=true in "addons-266876"
	I1121 23:47:30.317299  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.317400  251263 config.go:182] Loaded profile config "addons-266876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:47:30.317441  251263 addons.go:70] Setting storage-provisioner=true in profile "addons-266876"
	I1121 23:47:30.317460  251263 addons.go:239] Setting addon storage-provisioner=true in "addons-266876"
	I1121 23:47:30.317490  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.317944  251263 addons.go:70] Setting volcano=true in profile "addons-266876"
	I1121 23:47:30.317973  251263 addons.go:239] Setting addon volcano=true in "addons-266876"
	I1121 23:47:30.318000  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.318181  251263 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-266876"
	I1121 23:47:30.318207  251263 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-266876"
	I1121 23:47:30.318457  251263 addons.go:70] Setting volumesnapshots=true in profile "addons-266876"
	I1121 23:47:30.318489  251263 addons.go:239] Setting addon volumesnapshots=true in "addons-266876"
	I1121 23:47:30.318514  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.318636  251263 out.go:179] * Verifying Kubernetes components...
	I1121 23:47:30.321872  251263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:47:30.323979  251263 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1121 23:47:30.324015  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1121 23:47:30.324059  251263 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1121 23:47:30.324308  251263 addons.go:239] Setting addon default-storageclass=true in "addons-266876"
	I1121 23:47:30.324852  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.325430  251263 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1121 23:47:30.325460  251263 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1121 23:47:30.325834  251263 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1121 23:47:30.325536  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.326179  251263 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:47:30.326187  251263 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1121 23:47:30.326317  251263 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1121 23:47:30.326336  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1121 23:47:30.326936  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1121 23:47:30.326998  251263 out.go:179]   - Using image docker.io/registry:3.0.0
	I1121 23:47:30.326980  251263 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1121 23:47:30.327044  251263 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1121 23:47:30.327543  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	W1121 23:47:30.327112  251263 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1121 23:47:30.327823  251263 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1121 23:47:30.327894  251263 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 23:47:30.328316  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1121 23:47:30.327908  251263 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1121 23:47:30.327937  251263 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1121 23:47:30.328129  251263 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-266876"
	I1121 23:47:30.328994  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.328605  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1121 23:47:30.328665  251263 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 23:47:30.328694  251263 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 23:47:30.330248  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1121 23:47:30.329173  251263 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 23:47:30.330310  251263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 23:47:30.330603  251263 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1121 23:47:30.330604  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1121 23:47:30.331083  251263 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1121 23:47:30.330604  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1121 23:47:30.330630  251263 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1121 23:47:30.331264  251263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1121 23:47:30.330646  251263 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1121 23:47:30.330654  251263 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 23:47:30.331990  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1121 23:47:30.330703  251263 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 23:47:30.332116  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1121 23:47:30.331545  251263 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 23:47:30.332194  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 23:47:30.332542  251263 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1121 23:47:30.332882  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1121 23:47:30.334102  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1121 23:47:30.334436  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.335240  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.335327  251263 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:47:30.335355  251263 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1121 23:47:30.336111  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.336119  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.336147  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.336581  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1121 23:47:30.336829  251263 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 23:47:30.336847  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1121 23:47:30.336857  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.336898  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.336963  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.337875  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.337944  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.337986  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.338791  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.338889  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.339032  251263 out.go:179]   - Using image docker.io/busybox:stable
	I1121 23:47:30.339781  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1121 23:47:30.340483  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.340514  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.340666  251263 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 23:47:30.340695  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1121 23:47:30.340797  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.341117  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.341357  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.342122  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.342189  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.342220  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.342778  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.342795  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.342811  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.342975  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.343022  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.343206  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1121 23:47:30.343363  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.343504  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.343566  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.343596  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.344162  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.344636  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.344648  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.344718  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.344930  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.344977  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.345068  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.345337  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.345379  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.345381  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.345342  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.345569  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1121 23:47:30.345654  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.346248  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.346289  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.346396  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.346427  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.346508  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.346706  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.346995  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1121 23:47:30.347011  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1121 23:47:30.347328  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.347842  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.347873  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.348042  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.348168  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.348658  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.348696  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.348924  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.349955  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.350423  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.350455  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.350644  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	W1121 23:47:30.571554  251263 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53184->192.168.39.50:22: read: connection reset by peer
	I1121 23:47:30.571604  251263 retry.go:31] will retry after 237.893493ms: ssh: handshake failed: read tcp 192.168.39.1:53184->192.168.39.50:22: read: connection reset by peer
	W1121 23:47:30.594670  251263 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53214->192.168.39.50:22: read: connection reset by peer
	I1121 23:47:30.594718  251263 retry.go:31] will retry after 219.796697ms: ssh: handshake failed: read tcp 192.168.39.1:53214->192.168.39.50:22: read: connection reset by peer
	W1121 23:47:30.648821  251263 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53232->192.168.39.50:22: read: connection reset by peer
	I1121 23:47:30.648855  251263 retry.go:31] will retry after 280.923937ms: ssh: handshake failed: read tcp 192.168.39.1:53232->192.168.39.50:22: read: connection reset by peer
	I1121 23:47:30.906273  251263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 23:47:30.906343  251263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 23:47:31.303471  251263 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1121 23:47:31.303497  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 23:47:31.303519  251263 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1121 23:47:31.329075  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1121 23:47:31.372362  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 23:47:31.401245  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 23:47:31.443583  251263 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1121 23:47:31.443617  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1121 23:47:31.448834  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 23:47:31.496006  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 23:47:31.498539  251263 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1121 23:47:31.498563  251263 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1121 23:47:31.569835  251263 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1121 23:47:31.569869  251263 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1121 23:47:31.572494  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 23:47:31.624422  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1121 23:47:31.627643  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 23:47:31.900562  251263 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1121 23:47:31.900602  251263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1121 23:47:32.010439  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 23:47:32.024813  251263 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1121 23:47:32.024876  251263 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1121 23:47:32.170850  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1121 23:47:32.170888  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1121 23:47:32.219733  251263 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1121 23:47:32.219791  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1121 23:47:32.404951  251263 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1121 23:47:32.404996  251263 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1121 23:47:32.544216  251263 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1121 23:47:32.544253  251263 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1121 23:47:32.578250  251263 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 23:47:32.578284  251263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1121 23:47:32.653254  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1121 23:47:32.653285  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1121 23:47:32.741481  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1121 23:47:32.794874  251263 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1121 23:47:32.794909  251263 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1121 23:47:32.881148  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 23:47:33.067639  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1121 23:47:33.067700  251263 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1121 23:47:33.067715  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1121 23:47:33.067738  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1121 23:47:33.271805  251263 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1121 23:47:33.271834  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1121 23:47:33.312325  251263 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:47:33.312356  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1121 23:47:33.436072  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1121 23:47:33.436107  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1121 23:47:33.708500  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:47:33.708927  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1121 23:47:34.040431  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1121 23:47:34.040474  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1121 23:47:34.408465  251263 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.502153253s)
	I1121 23:47:34.408519  251263 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.502134143s)
	I1121 23:47:34.408554  251263 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1121 23:47:34.408578  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.105046996s)
	I1121 23:47:34.409219  251263 node_ready.go:35] waiting up to 6m0s for node "addons-266876" to be "Ready" ...
	I1121 23:47:34.415213  251263 node_ready.go:49] node "addons-266876" is "Ready"
	I1121 23:47:34.415248  251263 node_ready.go:38] duration metric: took 6.005684ms for node "addons-266876" to be "Ready" ...
	I1121 23:47:34.415268  251263 api_server.go:52] waiting for apiserver process to appear ...
	I1121 23:47:34.415324  251263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 23:47:34.664082  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1121 23:47:34.664113  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1121 23:47:34.918427  251263 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-266876" context rescaled to 1 replicas
	I1121 23:47:35.149255  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1121 23:47:35.149293  251263 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1121 23:47:35.732395  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1121 23:47:35.732425  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1121 23:47:36.406188  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1121 23:47:36.406216  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1121 23:47:36.897571  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 23:47:36.897608  251263 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1121 23:47:37.313754  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 23:47:37.790744  251263 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1121 23:47:37.793928  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:37.794570  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:37.794603  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:37.794806  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:38.530200  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.201079248s)
	I1121 23:47:38.530311  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.15790373s)
	I1121 23:47:38.530349  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.129067228s)
	I1121 23:47:38.530410  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.081551551s)
	I1121 23:47:38.530485  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.034438414s)
	I1121 23:47:38.530531  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.958009964s)
	I1121 23:47:38.530576  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.90611639s)
	I1121 23:47:38.530688  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.902998512s)
	W1121 23:47:38.596091  251263 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1121 23:47:38.696471  251263 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1121 23:47:39.049239  251263 addons.go:239] Setting addon gcp-auth=true in "addons-266876"
	I1121 23:47:39.049319  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:39.051589  251263 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1121 23:47:39.054431  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:39.054905  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:39.054946  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:39.055124  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:40.911949  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.901459816s)
	I1121 23:47:40.912003  251263 addons.go:495] Verifying addon ingress=true in "addons-266876"
	I1121 23:47:40.912027  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.170505015s)
	I1121 23:47:40.912060  251263 addons.go:495] Verifying addon registry=true in "addons-266876"
	I1121 23:47:40.912106  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.030918863s)
	I1121 23:47:40.912208  251263 addons.go:495] Verifying addon metrics-server=true in "addons-266876"
	I1121 23:47:40.913759  251263 out.go:179] * Verifying ingress addon...
	I1121 23:47:40.913769  251263 out.go:179] * Verifying registry addon...
	I1121 23:47:40.916006  251263 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1121 23:47:40.916028  251263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1121 23:47:41.040220  251263 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1121 23:47:41.040250  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:41.043403  251263 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 23:47:41.043428  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:41.261875  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.5533177s)
	W1121 23:47:41.261945  251263 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1121 23:47:41.261983  251263 retry.go:31] will retry after 128.365697ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1121 23:47:41.262010  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.553035838s)
	I1121 23:47:41.262077  251263 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.846726255s)
	I1121 23:47:41.262115  251263 api_server.go:72] duration metric: took 10.946861397s to wait for apiserver process to appear ...
	I1121 23:47:41.262194  251263 api_server.go:88] waiting for apiserver healthz status ...
	I1121 23:47:41.262220  251263 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I1121 23:47:41.263907  251263 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-266876 service yakd-dashboard -n yakd-dashboard
	
	I1121 23:47:41.282742  251263 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I1121 23:47:41.287497  251263 api_server.go:141] control plane version: v1.34.1
	I1121 23:47:41.287535  251263 api_server.go:131] duration metric: took 25.332513ms to wait for apiserver health ...
	I1121 23:47:41.287548  251263 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 23:47:41.306603  251263 system_pods.go:59] 16 kube-system pods found
	I1121 23:47:41.306658  251263 system_pods.go:61] "amd-gpu-device-plugin-pd4sx" [88fffae7-a3c2-46ef-a382-867c1f45dd2f] Running
	I1121 23:47:41.306672  251263 system_pods.go:61] "coredns-66bc5c9577-kmf4p" [c2dae9ee-3a8e-4c5c-9880-256aae9475c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:41.306696  251263 system_pods.go:61] "coredns-66bc5c9577-tgk67" [ad56ae13-a7c4-44e3-a817-73aa300110b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:41.306706  251263 system_pods.go:61] "etcd-addons-266876" [6329b2aa-df0d-4707-8094-52ed6a9b70fa] Running
	I1121 23:47:41.306714  251263 system_pods.go:61] "kube-apiserver-addons-266876" [d6500ed5-1e8a-40e7-8761-ce5d9b817580] Running
	I1121 23:47:41.306720  251263 system_pods.go:61] "kube-controller-manager-addons-266876" [9afdbaac-04de-4ae4-a1a5-ab74382c1ee4] Running
	I1121 23:47:41.306728  251263 system_pods.go:61] "kube-ingress-dns-minikube" [4c8445af-f050-4525-a580-c6cb45567d21] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:47:41.306737  251263 system_pods.go:61] "kube-proxy-d6jsf" [8c9f1dbf-19b7-4f19-8f33-11b6886f1237] Running
	I1121 23:47:41.306742  251263 system_pods.go:61] "kube-scheduler-addons-266876" [c367cb26-5fd3-4d41-8752-d8b7d6fe6c13] Running
	I1121 23:47:41.306749  251263 system_pods.go:61] "metrics-server-85b7d694d7-tcd7p" [8b9a51ce-61d0-430c-98de-9174d78d47d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:47:41.306759  251263 system_pods.go:61] "nvidia-device-plugin-daemonset-6fx49" [4603aca8-96ff-429c-870e-aaa1a7987b07] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 23:47:41.306768  251263 system_pods.go:61] "registry-6b586f9694-g5tcd" [5c882c89-9d82-4657-b7a0-d20f145866ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:47:41.306780  251263 system_pods.go:61] "registry-creds-764b6fb674-c6k42" [b80bedd0-b303-4ba0-9c40-f2fd2464333c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:47:41.306789  251263 system_pods.go:61] "registry-proxy-xjdbm" [9ce7be2d-00fc-42ac-8617-38e2d4ecac77] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 23:47:41.306795  251263 system_pods.go:61] "snapshot-controller-7d9fbc56b8-r57wx" [136bb70d-9950-46db-83d9-09b543dc4f72] Pending
	I1121 23:47:41.306803  251263 system_pods.go:61] "storage-provisioner" [2855a3de-b990-447c-b094-274b5becf1da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 23:47:41.306812  251263 system_pods.go:74] duration metric: took 19.257263ms to wait for pod list to return data ...
	I1121 23:47:41.306823  251263 default_sa.go:34] waiting for default service account to be created ...
	I1121 23:47:41.323263  251263 default_sa.go:45] found service account: "default"
	I1121 23:47:41.323302  251263 default_sa.go:55] duration metric: took 16.457401ms for default service account to be created ...
	I1121 23:47:41.323317  251263 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 23:47:41.337749  251263 system_pods.go:86] 17 kube-system pods found
	I1121 23:47:41.337783  251263 system_pods.go:89] "amd-gpu-device-plugin-pd4sx" [88fffae7-a3c2-46ef-a382-867c1f45dd2f] Running
	I1121 23:47:41.337791  251263 system_pods.go:89] "coredns-66bc5c9577-kmf4p" [c2dae9ee-3a8e-4c5c-9880-256aae9475c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:41.337797  251263 system_pods.go:89] "coredns-66bc5c9577-tgk67" [ad56ae13-a7c4-44e3-a817-73aa300110b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:41.337803  251263 system_pods.go:89] "etcd-addons-266876" [6329b2aa-df0d-4707-8094-52ed6a9b70fa] Running
	I1121 23:47:41.337808  251263 system_pods.go:89] "kube-apiserver-addons-266876" [d6500ed5-1e8a-40e7-8761-ce5d9b817580] Running
	I1121 23:47:41.337812  251263 system_pods.go:89] "kube-controller-manager-addons-266876" [9afdbaac-04de-4ae4-a1a5-ab74382c1ee4] Running
	I1121 23:47:41.337817  251263 system_pods.go:89] "kube-ingress-dns-minikube" [4c8445af-f050-4525-a580-c6cb45567d21] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:47:41.337821  251263 system_pods.go:89] "kube-proxy-d6jsf" [8c9f1dbf-19b7-4f19-8f33-11b6886f1237] Running
	I1121 23:47:41.337826  251263 system_pods.go:89] "kube-scheduler-addons-266876" [c367cb26-5fd3-4d41-8752-d8b7d6fe6c13] Running
	I1121 23:47:41.337831  251263 system_pods.go:89] "metrics-server-85b7d694d7-tcd7p" [8b9a51ce-61d0-430c-98de-9174d78d47d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:47:41.337839  251263 system_pods.go:89] "nvidia-device-plugin-daemonset-6fx49" [4603aca8-96ff-429c-870e-aaa1a7987b07] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 23:47:41.337844  251263 system_pods.go:89] "registry-6b586f9694-g5tcd" [5c882c89-9d82-4657-b7a0-d20f145866ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:47:41.337849  251263 system_pods.go:89] "registry-creds-764b6fb674-c6k42" [b80bedd0-b303-4ba0-9c40-f2fd2464333c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:47:41.337854  251263 system_pods.go:89] "registry-proxy-xjdbm" [9ce7be2d-00fc-42ac-8617-38e2d4ecac77] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 23:47:41.337876  251263 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gcprx" [38cf49f5-ed6e-4aa5-bdfe-2494e5763f39] Pending
	I1121 23:47:41.337881  251263 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r57wx" [136bb70d-9950-46db-83d9-09b543dc4f72] Pending
	I1121 23:47:41.337885  251263 system_pods.go:89] "storage-provisioner" [2855a3de-b990-447c-b094-274b5becf1da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 23:47:41.337897  251263 system_pods.go:126] duration metric: took 14.572276ms to wait for k8s-apps to be running ...
	I1121 23:47:41.337909  251263 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 23:47:41.337964  251263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 23:47:41.391055  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:47:41.444001  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:41.452955  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:41.927933  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:41.929997  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:42.455799  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:42.455860  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:42.926969  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.613140073s)
	I1121 23:47:42.927027  251263 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-266876"
	I1121 23:47:42.927049  251263 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.875424504s)
	I1121 23:47:42.927114  251263 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.589124511s)
	I1121 23:47:42.927233  251263 system_svc.go:56] duration metric: took 1.589318384s WaitForService to wait for kubelet
	I1121 23:47:42.927248  251263 kubeadm.go:587] duration metric: took 12.611994145s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 23:47:42.927275  251263 node_conditions.go:102] verifying NodePressure condition ...
	I1121 23:47:42.928903  251263 out.go:179] * Verifying csi-hostpath-driver addon...
	I1121 23:47:42.928918  251263 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:47:42.930225  251263 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1121 23:47:42.930998  251263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1121 23:47:42.931460  251263 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1121 23:47:42.931483  251263 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1121 23:47:42.948957  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:42.956545  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:42.972599  251263 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 23:47:42.972629  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:42.991010  251263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1121 23:47:42.991043  251263 node_conditions.go:123] node cpu capacity is 2
	I1121 23:47:42.991060  251263 node_conditions.go:105] duration metric: took 63.779822ms to run NodePressure ...
	I1121 23:47:42.991073  251263 start.go:242] waiting for startup goroutines ...
	I1121 23:47:43.000454  251263 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1121 23:47:43.000488  251263 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1121 23:47:43.064083  251263 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 23:47:43.064114  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1121 23:47:43.143418  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 23:47:43.424997  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:43.428350  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:43.438981  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:43.744014  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.352903636s)
	I1121 23:47:43.926051  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:43.926403  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:43.939557  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:44.470136  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:44.470507  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:44.470583  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:44.610973  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.467509011s)
	I1121 23:47:44.612084  251263 addons.go:495] Verifying addon gcp-auth=true in "addons-266876"
	I1121 23:47:44.614664  251263 out.go:179] * Verifying gcp-auth addon...
	I1121 23:47:44.617037  251263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1121 23:47:44.679516  251263 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1121 23:47:44.679539  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:44.938585  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:44.939917  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:44.945173  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:45.125511  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:45.423184  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:45.424380  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:45.438459  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:45.621893  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:45.929603  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:45.933258  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:45.938917  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:46.123924  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:46.423081  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:46.425799  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:46.437310  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:46.623291  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:46.925943  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:46.926661  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:46.940308  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:47.120567  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:47.421527  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:47.422825  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:47.435356  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:47.622778  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:47.922908  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:47.925722  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:47.937113  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:48.122097  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:48.423467  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:48.423610  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:48.435064  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:48.622264  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:48.926889  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:48.926907  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:48.935809  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:49.124186  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:49.424165  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:49.424235  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:49.436947  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:49.623380  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:49.926485  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:49.926568  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:49.934726  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:50.149039  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:50.426766  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:50.427550  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:50.435800  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:50.623645  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:50.923166  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:50.924899  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:50.937932  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:51.120970  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:51.422946  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:51.423964  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:51.437143  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:51.623848  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:51.924227  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:51.929471  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:51.939629  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:52.261854  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:52.424962  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:52.428597  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:52.436986  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:52.622910  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:52.922271  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:52.924973  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:52.938365  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:53.121701  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:53.425753  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:53.438148  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:53.440564  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:53.709895  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:53.929068  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:53.931342  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:53.938714  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:54.122158  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:54.425360  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:54.428330  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:54.435907  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:54.623125  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:54.926160  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:54.926269  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:54.934959  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:55.123657  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:55.422851  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:55.423292  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:55.436852  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:55.621782  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:56.184531  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:56.185319  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:56.185351  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:56.185436  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:56.422356  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:56.422605  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:56.437477  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:56.621926  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:56.920916  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:56.921374  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:56.935238  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:57.120293  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:57.422033  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:57.424320  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:57.435388  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:57.621432  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:57.920963  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:57.924452  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:57.935839  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:58.121584  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:58.425091  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:58.425156  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:58.435426  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:58.635444  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:58.922739  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:58.923871  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:58.936112  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:59.123863  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:59.426020  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:59.430811  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:59.438808  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:59.623106  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:59.931900  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:59.936038  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:59.937959  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:00.122854  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:00.422993  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:00.424741  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:00.436196  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:00.620554  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:00.921652  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:00.922569  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:00.935087  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:01.123823  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:01.423850  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:01.425512  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:01.434928  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:01.621491  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:01.923505  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:01.924905  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:01.937201  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:02.121624  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:02.423602  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:02.423787  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:02.435107  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:02.620510  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:02.919996  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:02.921258  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:02.934427  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:03.121234  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:03.422602  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:03.422661  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:03.435654  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:03.627887  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:03.923184  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:03.923492  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:03.943565  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:04.122960  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:04.421986  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:04.422381  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:04.435361  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:04.623019  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:04.923848  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:04.925058  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:04.935882  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:05.121708  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:05.421718  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:05.421805  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:05.434879  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:05.622686  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:05.922353  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:05.923753  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:05.936216  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:06.120868  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:06.423712  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:06.423899  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:06.439806  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:06.625663  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:06.922260  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:06.922652  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:06.936062  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:07.121430  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:07.424027  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:07.424073  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:07.435511  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:07.622294  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:07.921125  251263 kapi.go:107] duration metric: took 27.005089483s to wait for kubernetes.io/minikube-addons=registry ...
	I1121 23:48:07.923396  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:07.939621  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:08.121478  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:08.519292  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:08.522400  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:08.626487  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:08.919824  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:08.935099  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:09.123034  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:09.427247  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:09.439663  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:09.630747  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:09.924829  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:09.937762  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:10.126266  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:10.423912  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:10.442758  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:10.829148  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:10.928186  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:10.938788  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:11.126344  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:11.423503  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:11.440161  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:11.628256  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:11.922200  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:12.026774  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:12.122410  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:12.425763  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:12.435748  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:12.620552  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:12.954050  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:12.957856  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:13.126813  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:13.421360  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:13.435025  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:13.629500  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:13.922707  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:13.935410  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:14.123341  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:14.426174  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:14.436803  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:14.622210  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:14.941433  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:14.941557  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:15.122789  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:15.422344  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:15.435838  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:15.620803  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:15.922769  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:15.936263  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:16.123330  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:16.420710  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:16.437443  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:16.622053  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:16.922695  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:16.940782  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:17.241963  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:17.422836  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:17.436564  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:17.623372  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:17.919854  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:17.948897  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:18.124153  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:18.423733  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:18.436717  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:18.622046  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:18.922805  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:18.935793  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:19.122329  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:19.425051  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:19.439118  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:19.619916  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:19.920748  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:19.937662  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:20.128846  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:20.427312  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:20.441072  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:20.627540  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:20.922225  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:20.935498  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:21.125438  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:21.421980  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:21.435607  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:21.622394  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:21.920638  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:21.935580  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:22.121779  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:22.425387  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:22.436106  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:22.622379  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:22.922035  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:22.939454  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:23.123644  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:23.422127  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:23.437099  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:23.621255  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:23.921598  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:23.936278  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:24.121938  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:24.421559  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:24.435263  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:24.621048  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:24.921427  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:24.936154  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:25.128780  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:25.436990  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:25.447989  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:25.627750  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:25.925784  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:25.936653  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:26.125097  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:26.421139  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:26.435288  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:26.621354  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:26.979865  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:26.982130  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:27.121596  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:27.421737  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:27.436413  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:27.622223  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:27.923259  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:27.938238  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:28.122777  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:28.422102  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:28.435098  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:28.624943  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:28.923578  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:28.934884  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:29.123227  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:29.422918  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:29.440055  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:29.621947  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:29.924766  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:29.943765  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:30.125218  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:30.427521  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:30.435473  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:30.622346  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:30.926321  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:30.935211  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:31.125820  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:31.423165  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:31.435981  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:31.624574  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:31.924255  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:31.937572  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:32.123297  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:32.420253  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:32.435092  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:32.620642  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:32.924708  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:32.936867  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:33.122959  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:33.421260  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:33.435115  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:33.622355  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:33.922446  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:33.937891  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:34.121936  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:34.422837  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:34.436876  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:34.621392  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:34.922989  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:34.936968  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:35.121994  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:35.420314  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:35.435229  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:35.620372  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:35.921246  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:35.935379  251263 kapi.go:107] duration metric: took 53.004380156s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1121 23:48:36.121002  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:36.421297  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:36.620475  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:36.920737  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:37.121903  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:37.420740  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:37.621573  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:37.920470  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:38.120871  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:38.419747  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:38.620870  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:38.919569  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:39.121472  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:39.420632  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:39.621914  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:39.919274  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:40.120595  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:40.420718  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:40.621509  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:40.920672  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:41.121166  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:41.422011  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:41.622380  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:41.921196  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:42.120596  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:42.420828  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:42.621388  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:42.921558  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:43.121925  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:43.419853  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:43.622393  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:43.920887  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:44.121285  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:44.420735  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:44.622063  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:44.920303  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:45.123622  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:45.422460  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:45.623240  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:45.938878  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:46.121145  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:46.421462  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:46.621556  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:46.920539  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:47.123242  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:47.434774  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:47.623534  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:47.929223  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:48.125077  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:48.421704  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:48.623369  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:48.922650  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:49.123639  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:49.421456  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:49.624574  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:49.931049  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:50.124348  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:50.420556  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:50.622234  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:50.924025  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:51.124075  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:51.423011  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:51.623295  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:51.920670  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:52.121233  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:52.424341  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:52.621172  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:52.921299  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:53.121769  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:53.420110  251263 kapi.go:107] duration metric: took 1m12.504106807s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1121 23:48:53.621962  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:54.127660  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:54.626400  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:55.122945  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:55.724403  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:56.123402  251263 kapi.go:107] duration metric: took 1m11.506366647s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1121 23:48:56.125238  251263 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-266876 cluster.
	I1121 23:48:56.126693  251263 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1121 23:48:56.128133  251263 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
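
As a side note on the gcp-auth hints printed above: opting a pod out of credential injection means carrying the `gcp-auth-skip-secret` label in the pod's own configuration (labeling an already-running pod is not enough, since the log also says existing pods must be recreated or the addon re-enabled with --refresh). The sketch below is purely illustrative; only the label key comes from the output above, while the pod name, image, and the label value "true" are assumptions:

kubectl --context addons-266876 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds              # hypothetical name, for illustration only
  labels:
    gcp-auth-skip-secret: "true"  # label key taken from the hint above; value assumed
spec:
  containers:
  - name: app
    image: busybox:stable
    command: ["sleep", "3600"]
EOF
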
	I1121 23:48:56.129655  251263 out.go:179] * Enabled addons: amd-gpu-device-plugin, inspektor-gadget, ingress-dns, registry-creds, nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1121 23:48:56.131230  251263 addons.go:530] duration metric: took 1m25.815935443s for enable addons: enabled=[amd-gpu-device-plugin inspektor-gadget ingress-dns registry-creds nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1121 23:48:56.131297  251263 start.go:247] waiting for cluster config update ...
	I1121 23:48:56.131318  251263 start.go:256] writing updated cluster config ...
	I1121 23:48:56.131603  251263 ssh_runner.go:195] Run: rm -f paused
	I1121 23:48:56.139138  251263 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 23:48:56.143255  251263 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tgk67" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.149223  251263 pod_ready.go:94] pod "coredns-66bc5c9577-tgk67" is "Ready"
	I1121 23:48:56.149248  251263 pod_ready.go:86] duration metric: took 5.967724ms for pod "coredns-66bc5c9577-tgk67" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.152622  251263 pod_ready.go:83] waiting for pod "etcd-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.158325  251263 pod_ready.go:94] pod "etcd-addons-266876" is "Ready"
	I1121 23:48:56.158348  251263 pod_ready.go:86] duration metric: took 5.699178ms for pod "etcd-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.161017  251263 pod_ready.go:83] waiting for pod "kube-apiserver-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.165701  251263 pod_ready.go:94] pod "kube-apiserver-addons-266876" is "Ready"
	I1121 23:48:56.165731  251263 pod_ready.go:86] duration metric: took 4.68133ms for pod "kube-apiserver-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.167794  251263 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.546100  251263 pod_ready.go:94] pod "kube-controller-manager-addons-266876" is "Ready"
	I1121 23:48:56.546140  251263 pod_ready.go:86] duration metric: took 378.321116ms for pod "kube-controller-manager-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.744763  251263 pod_ready.go:83] waiting for pod "kube-proxy-d6jsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:57.145028  251263 pod_ready.go:94] pod "kube-proxy-d6jsf" is "Ready"
	I1121 23:48:57.145065  251263 pod_ready.go:86] duration metric: took 400.263759ms for pod "kube-proxy-d6jsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:57.344109  251263 pod_ready.go:83] waiting for pod "kube-scheduler-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:57.744881  251263 pod_ready.go:94] pod "kube-scheduler-addons-266876" is "Ready"
	I1121 23:48:57.744924  251263 pod_ready.go:86] duration metric: took 400.779811ms for pod "kube-scheduler-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:57.744942  251263 pod_ready.go:40] duration metric: took 1.605761032s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
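The pod_ready block above is minikube's extra wait for the core kube-system pods (CoreDNS, etcd, apiserver, controller-manager, proxy, scheduler) to report Ready. A roughly equivalent manual check against this cluster, assuming the same k8s-app=kube-dns selector used for CoreDNS, would be:

	kubectl --context addons-266876 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s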
	I1121 23:48:57.792759  251263 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 23:48:57.794548  251263 out.go:179] * Done! kubectl is now configured to use "addons-266876" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.227686735Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763769342227657584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:532066,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df8833fd-08ce-48b5-b7d9-118afab228e9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.228666072Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e728d3c6-69ac-4e86-8516-60efca6f20ab name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.228749335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e728d3c6-69ac-4e86-8516-60efca6f20ab name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.229254477Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:991f92b0bd577e0738eef35d65b7c9638d3df53ccea140bf89bfa778f911574f,PodSandboxId:f7f9ecdee49d2fbed73c95266531e98528194cc29d1af1e15c07c6a5e790026a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763768959705557696,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e3ed8c7-5788-4d41-aba1-71043fc65fb1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1205f66bfddc482ac5d5dd1e86224c67e867616e61123b413f3bb6856473dc12,PodSandboxId:7a5080c12c12abdaab939ff714e1e919b116a416135fb5a4c0954b2d73aa77d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763768941325630164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b5956ac-11bb-458f-953a-f0fa68bf575e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51813a3108d9e54ce1c3496176ac5114e7bd1188f2c3673f4a4a3480910eced6,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763768914769733150,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491a8ff7c586acede1f8b3b37821df605946465cc57d997a527260e81bc84cbe,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763768913042350736,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62345e24511bafa136f68a223ce7ed0c511a449ccaac17d536939d218364c8e0,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763768911235493555,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a47
1-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c36592147c998eee903b15108ec385188d6a10ba82bdbc75a1e806aedb354e7,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763768910213861282,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
a1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552ab85d759ae6b592b4d62982e120e0f046fdad6cf73d39fa8e079973301b19,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763768908471046022,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b904d30a44673800c0c3034a976f6ac03bbb3ec299f6d92bb1a5c6ea170a7c57,PodSandboxId:ff134a61cd64ed6f5542d7c8c8469ae269bd32b9fdf955f510ccf7e0b5589fb8,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:17637689071836483
11,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dfe8e9b-0142-42cf-ba29-27aeadf91605,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4683ce225f87d35eb79e6cefdb9d7c48be7cc40e3230995b8b0843525d0bd27,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba
9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763768905559322925,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2e4d571c23b9bf7d4bfb72e3338c12264469ce70e7e72754afd69967515d13,PodSandboxId:5007bb0b80f021b12e7f6a9a43425ddf2f36685c53a522cd08d8b714c4ec20eb,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763768903867338270,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79de7084-4282-49f8-a4d1-582323611ce3,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37dea366f964b8791137712d958e380263c762d6943592d23f145fad119cd6b5,PodSandboxId:1e73211f223b9745771ca2d0de7f25252821645aa5ecdbedb9590018486b8b7e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900663296805,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-gcprx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cf49f5-ed6e-4aa5-bdfe-2494e5763f39,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f748bb4b27c533f7b87016c8a98346dcf32afaef8d230d73d0764252cbb72f,PodSandboxId:1c267215c3e5b6dac05194130ddb58527757745ce779475a28dd1c84610883b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900552896022,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-r57wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136bb70d-9950-46db-83d9-09b543dc4f72,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe7bb60492b04fd1b087025749469403adeef011272e8f3f22c00ada731cdcb3,PodSandboxId:15b64b5856939f8bac45fb57619eff6fafc932dc05ba97fe1943560b384bd630,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763768898866225812,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vl5f9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09a904d5-755f-4f1f-9525-b10e4e4b57a7,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409,PodSandboxId:f1662e37013476f1ac5ede7d21406a137d8e8672c36dcf49068d30172dc639f2,Metadata:&Container
Metadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763768860250918214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2855a3de-b990-447c-b094-274b5becf1da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d414f30f9b272529b258344187ca2317cbc5a4141f4ffe1bc6fa0f7df80bd5bb,PodSandboxId:79f2d64c3813a612bcd08d986b458be1fc1d2f0a3922d4db70b605b18db55f18,Metadata:&ContainerMetadata{Nam
e:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763768857987378401,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pd4sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fffae7-a3c2-46ef-a382-867c1f45dd2f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198,PodSandboxId:9607023c4fe8e371d977e1e4a2b52b0e80675b763c0cd2b3ae209db0
0b96f2cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763768852094801650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tgk67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad56ae13-a7c4-44e3-a817-73aa300110b6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40,PodSandboxId:1ce41f042f494eef5d0be46b1db7d599bafdcc97fef69e0d7f1782bd275c54ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763768851042427654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6jsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9f1dbf-19b7-4f19-8f33-11b6886f1237,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d,PodSandboxId:a6e11d2b9834f78610e1487003d17adb09fe565622e235df1217303546294639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763768839147876729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9c952f476472031bed61db83e3c978,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e,PodSandboxId:212b2600cae8f28bb69999f005d0b52383e46a97173ab430efc504c81b7dd8ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763768839130699997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca8736316ec035d06c4ec08eb70b85a,},Annotations:map[string]string{io.kubernetes.container.hash: e
9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7,PodSandboxId:7fb7e928bee471d4ec16ce9fb677f233172c530080bee3cf5baf5b1c28826973,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763768839107116372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: e0a4452486189aa3ae5b593dc3a43cac,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a216f1821ac9afeecc64f730374536676eea6c11ca4c8f9bde7913e3b74b219,PodSandboxId:43d68a4f9086a85d2e8c987fa17f958b95725f1d9f37b716010e99d58676be1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763768839078348144,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e0cb0275164bea778c164d97826c53,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e728d3c6-69ac-4e86-8516-60efca6f20ab name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.269749941Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50a93153-a773-49dc-b43a-e7050bacc35c name=/runtime.v1.RuntimeService/Version
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.269853708Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50a93153-a773-49dc-b43a-e7050bacc35c name=/runtime.v1.RuntimeService/Version
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.271650343Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59489fb9-c8ad-4fb2-898d-f826cc659090 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.273170711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763769342273144379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:532066,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59489fb9-c8ad-4fb2-898d-f826cc659090 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.274174860Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8dea7db5-6624-427c-8d65-d381be1426c4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.274249193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8dea7db5-6624-427c-8d65-d381be1426c4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.274698790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:991f92b0bd577e0738eef35d65b7c9638d3df53ccea140bf89bfa778f911574f,PodSandboxId:f7f9ecdee49d2fbed73c95266531e98528194cc29d1af1e15c07c6a5e790026a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763768959705557696,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e3ed8c7-5788-4d41-aba1-71043fc65fb1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1205f66bfddc482ac5d5dd1e86224c67e867616e61123b413f3bb6856473dc12,PodSandboxId:7a5080c12c12abdaab939ff714e1e919b116a416135fb5a4c0954b2d73aa77d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763768941325630164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b5956ac-11bb-458f-953a-f0fa68bf575e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51813a3108d9e54ce1c3496176ac5114e7bd1188f2c3673f4a4a3480910eced6,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763768914769733150,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491a8ff7c586acede1f8b3b37821df605946465cc57d997a527260e81bc84cbe,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763768913042350736,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62345e24511bafa136f68a223ce7ed0c511a449ccaac17d536939d218364c8e0,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763768911235493555,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a47
1-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c36592147c998eee903b15108ec385188d6a10ba82bdbc75a1e806aedb354e7,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763768910213861282,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
a1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552ab85d759ae6b592b4d62982e120e0f046fdad6cf73d39fa8e079973301b19,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763768908471046022,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b904d30a44673800c0c3034a976f6ac03bbb3ec299f6d92bb1a5c6ea170a7c57,PodSandboxId:ff134a61cd64ed6f5542d7c8c8469ae269bd32b9fdf955f510ccf7e0b5589fb8,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:17637689071836483
11,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dfe8e9b-0142-42cf-ba29-27aeadf91605,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4683ce225f87d35eb79e6cefdb9d7c48be7cc40e3230995b8b0843525d0bd27,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba
9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763768905559322925,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2e4d571c23b9bf7d4bfb72e3338c12264469ce70e7e72754afd69967515d13,PodSandboxId:5007bb0b80f021b12e7f6a9a43425ddf2f36685c53a522cd08d8b714c4ec20eb,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763768903867338270,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79de7084-4282-49f8-a4d1-582323611ce3,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37dea366f964b8791137712d958e380263c762d6943592d23f145fad119cd6b5,PodSandboxId:1e73211f223b9745771ca2d0de7f25252821645aa5ecdbedb9590018486b8b7e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900663296805,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-gcprx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cf49f5-ed6e-4aa5-bdfe-2494e5763f39,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f748bb4b27c533f7b87016c8a98346dcf32afaef8d230d73d0764252cbb72f,PodSandboxId:1c267215c3e5b6dac05194130ddb58527757745ce779475a28dd1c84610883b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900552896022,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-r57wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136bb70d-9950-46db-83d9-09b543dc4f72,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe7bb60492b04fd1b087025749469403adeef011272e8f3f22c00ada731cdcb3,PodSandboxId:15b64b5856939f8bac45fb57619eff6fafc932dc05ba97fe1943560b384bd630,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763768898866225812,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vl5f9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09a904d5-755f-4f1f-9525-b10e4e4b57a7,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409,PodSandboxId:f1662e37013476f1ac5ede7d21406a137d8e8672c36dcf49068d30172dc639f2,Metadata:&Container
Metadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763768860250918214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2855a3de-b990-447c-b094-274b5becf1da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d414f30f9b272529b258344187ca2317cbc5a4141f4ffe1bc6fa0f7df80bd5bb,PodSandboxId:79f2d64c3813a612bcd08d986b458be1fc1d2f0a3922d4db70b605b18db55f18,Metadata:&ContainerMetadata{Nam
e:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763768857987378401,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pd4sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fffae7-a3c2-46ef-a382-867c1f45dd2f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198,PodSandboxId:9607023c4fe8e371d977e1e4a2b52b0e80675b763c0cd2b3ae209db0
0b96f2cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763768852094801650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tgk67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad56ae13-a7c4-44e3-a817-73aa300110b6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40,PodSandboxId:1ce41f042f494eef5d0be46b1db7d599bafdcc97fef69e0d7f1782bd275c54ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763768851042427654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6jsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9f1dbf-19b7-4f19-8f33-11b6886f1237,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d,PodSandboxId:a6e11d2b9834f78610e1487003d17adb09fe565622e235df1217303546294639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763768839147876729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9c952f476472031bed61db83e3c978,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e,PodSandboxId:212b2600cae8f28bb69999f005d0b52383e46a97173ab430efc504c81b7dd8ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763768839130699997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca8736316ec035d06c4ec08eb70b85a,},Annotations:map[string]string{io.kubernetes.container.hash: e
9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7,PodSandboxId:7fb7e928bee471d4ec16ce9fb677f233172c530080bee3cf5baf5b1c28826973,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763768839107116372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: e0a4452486189aa3ae5b593dc3a43cac,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a216f1821ac9afeecc64f730374536676eea6c11ca4c8f9bde7913e3b74b219,PodSandboxId:43d68a4f9086a85d2e8c987fa17f958b95725f1d9f37b716010e99d58676be1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763768839078348144,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e0cb0275164bea778c164d97826c53,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8dea7db5-6624-427c-8d65-d381be1426c4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.312493465Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=adbf479b-dc60-4582-a7c2-1eedb93a1822 name=/runtime.v1.RuntimeService/Version
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.312767375Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=adbf479b-dc60-4582-a7c2-1eedb93a1822 name=/runtime.v1.RuntimeService/Version
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.314389047Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=721baab9-e3c3-4a01-a842-7baab306ccc0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.317165770Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763769342317138454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:532066,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=721baab9-e3c3-4a01-a842-7baab306ccc0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.318563573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb64a22c-79d9-4337-b884-da277443ece6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.318694078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb64a22c-79d9-4337-b884-da277443ece6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.319187013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:991f92b0bd577e0738eef35d65b7c9638d3df53ccea140bf89bfa778f911574f,PodSandboxId:f7f9ecdee49d2fbed73c95266531e98528194cc29d1af1e15c07c6a5e790026a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763768959705557696,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e3ed8c7-5788-4d41-aba1-71043fc65fb1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1205f66bfddc482ac5d5dd1e86224c67e867616e61123b413f3bb6856473dc12,PodSandboxId:7a5080c12c12abdaab939ff714e1e919b116a416135fb5a4c0954b2d73aa77d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763768941325630164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b5956ac-11bb-458f-953a-f0fa68bf575e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51813a3108d9e54ce1c3496176ac5114e7bd1188f2c3673f4a4a3480910eced6,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763768914769733150,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491a8ff7c586acede1f8b3b37821df605946465cc57d997a527260e81bc84cbe,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763768913042350736,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62345e24511bafa136f68a223ce7ed0c511a449ccaac17d536939d218364c8e0,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763768911235493555,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a47
1-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c36592147c998eee903b15108ec385188d6a10ba82bdbc75a1e806aedb354e7,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763768910213861282,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
a1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552ab85d759ae6b592b4d62982e120e0f046fdad6cf73d39fa8e079973301b19,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763768908471046022,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b904d30a44673800c0c3034a976f6ac03bbb3ec299f6d92bb1a5c6ea170a7c57,PodSandboxId:ff134a61cd64ed6f5542d7c8c8469ae269bd32b9fdf955f510ccf7e0b5589fb8,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:17637689071836483
11,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dfe8e9b-0142-42cf-ba29-27aeadf91605,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4683ce225f87d35eb79e6cefdb9d7c48be7cc40e3230995b8b0843525d0bd27,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba
9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763768905559322925,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2e4d571c23b9bf7d4bfb72e3338c12264469ce70e7e72754afd69967515d13,PodSandboxId:5007bb0b80f021b12e7f6a9a43425ddf2f36685c53a522cd08d8b714c4ec20eb,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763768903867338270,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79de7084-4282-49f8-a4d1-582323611ce3,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37dea366f964b8791137712d958e380263c762d6943592d23f145fad119cd6b5,PodSandboxId:1e73211f223b9745771ca2d0de7f25252821645aa5ecdbedb9590018486b8b7e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900663296805,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-gcprx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cf49f5-ed6e-4aa5-bdfe-2494e5763f39,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f748bb4b27c533f7b87016c8a98346dcf32afaef8d230d73d0764252cbb72f,PodSandboxId:1c267215c3e5b6dac05194130ddb58527757745ce779475a28dd1c84610883b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900552896022,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-r57wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136bb70d-9950-46db-83d9-09b543dc4f72,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe7bb60492b04fd1b087025749469403adeef011272e8f3f22c00ada731cdcb3,PodSandboxId:15b64b5856939f8bac45fb57619eff6fafc932dc05ba97fe1943560b384bd630,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763768898866225812,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vl5f9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09a904d5-755f-4f1f-9525-b10e4e4b57a7,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409,PodSandboxId:f1662e37013476f1ac5ede7d21406a137d8e8672c36dcf49068d30172dc639f2,Metadata:&Container
Metadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763768860250918214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2855a3de-b990-447c-b094-274b5becf1da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d414f30f9b272529b258344187ca2317cbc5a4141f4ffe1bc6fa0f7df80bd5bb,PodSandboxId:79f2d64c3813a612bcd08d986b458be1fc1d2f0a3922d4db70b605b18db55f18,Metadata:&ContainerMetadata{Nam
e:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763768857987378401,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pd4sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fffae7-a3c2-46ef-a382-867c1f45dd2f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198,PodSandboxId:9607023c4fe8e371d977e1e4a2b52b0e80675b763c0cd2b3ae209db0
0b96f2cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763768852094801650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tgk67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad56ae13-a7c4-44e3-a817-73aa300110b6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40,PodSandboxId:1ce41f042f494eef5d0be46b1db7d599bafdcc97fef69e0d7f1782bd275c54ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763768851042427654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6jsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9f1dbf-19b7-4f19-8f33-11b6886f1237,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d,PodSandboxId:a6e11d2b9834f78610e1487003d17adb09fe565622e235df1217303546294639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763768839147876729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9c952f476472031bed61db83e3c978,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e,PodSandboxId:212b2600cae8f28bb69999f005d0b52383e46a97173ab430efc504c81b7dd8ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763768839130699997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca8736316ec035d06c4ec08eb70b85a,},Annotations:map[string]string{io.kubernetes.container.hash: e
9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7,PodSandboxId:7fb7e928bee471d4ec16ce9fb677f233172c530080bee3cf5baf5b1c28826973,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763768839107116372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: e0a4452486189aa3ae5b593dc3a43cac,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a216f1821ac9afeecc64f730374536676eea6c11ca4c8f9bde7913e3b74b219,PodSandboxId:43d68a4f9086a85d2e8c987fa17f958b95725f1d9f37b716010e99d58676be1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763768839078348144,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e0cb0275164bea778c164d97826c53,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb64a22c-79d9-4337-b884-da277443ece6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.357398111Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db048105-c518-49c1-861f-c0f705561be9 name=/runtime.v1.RuntimeService/Version
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.357619407Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db048105-c518-49c1-861f-c0f705561be9 name=/runtime.v1.RuntimeService/Version
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.359725655Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bbc9b1a3-d7e3-41ea-b5b0-8eec2961e948 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.360912815Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763769342360882714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:532066,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bbc9b1a3-d7e3-41ea-b5b0-8eec2961e948 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.361850605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e5094d5-8819-454c-9fff-435680e9c202 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.362201005Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e5094d5-8819-454c-9fff-435680e9c202 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:55:42 addons-266876 crio[816]: time="2025-11-21 23:55:42.363239233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:991f92b0bd577e0738eef35d65b7c9638d3df53ccea140bf89bfa778f911574f,PodSandboxId:f7f9ecdee49d2fbed73c95266531e98528194cc29d1af1e15c07c6a5e790026a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763768959705557696,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e3ed8c7-5788-4d41-aba1-71043fc65fb1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1205f66bfddc482ac5d5dd1e86224c67e867616e61123b413f3bb6856473dc12,PodSandboxId:7a5080c12c12abdaab939ff714e1e919b116a416135fb5a4c0954b2d73aa77d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763768941325630164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b5956ac-11bb-458f-953a-f0fa68bf575e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51813a3108d9e54ce1c3496176ac5114e7bd1188f2c3673f4a4a3480910eced6,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763768914769733150,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491a8ff7c586acede1f8b3b37821df605946465cc57d997a527260e81bc84cbe,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763768913042350736,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62345e24511bafa136f68a223ce7ed0c511a449ccaac17d536939d218364c8e0,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763768911235493555,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a47
1-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c36592147c998eee903b15108ec385188d6a10ba82bdbc75a1e806aedb354e7,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763768910213861282,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
a1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552ab85d759ae6b592b4d62982e120e0f046fdad6cf73d39fa8e079973301b19,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763768908471046022,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b904d30a44673800c0c3034a976f6ac03bbb3ec299f6d92bb1a5c6ea170a7c57,PodSandboxId:ff134a61cd64ed6f5542d7c8c8469ae269bd32b9fdf955f510ccf7e0b5589fb8,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:17637689071836483
11,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dfe8e9b-0142-42cf-ba29-27aeadf91605,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4683ce225f87d35eb79e6cefdb9d7c48be7cc40e3230995b8b0843525d0bd27,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba
9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763768905559322925,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2e4d571c23b9bf7d4bfb72e3338c12264469ce70e7e72754afd69967515d13,PodSandboxId:5007bb0b80f021b12e7f6a9a43425ddf2f36685c53a522cd08d8b714c4ec20eb,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763768903867338270,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79de7084-4282-49f8-a4d1-582323611ce3,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37dea366f964b8791137712d958e380263c762d6943592d23f145fad119cd6b5,PodSandboxId:1e73211f223b9745771ca2d0de7f25252821645aa5ecdbedb9590018486b8b7e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900663296805,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-gcprx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cf49f5-ed6e-4aa5-bdfe-2494e5763f39,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f748bb4b27c533f7b87016c8a98346dcf32afaef8d230d73d0764252cbb72f,PodSandboxId:1c267215c3e5b6dac05194130ddb58527757745ce779475a28dd1c84610883b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900552896022,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-r57wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136bb70d-9950-46db-83d9-09b543dc4f72,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe7bb60492b04fd1b087025749469403adeef011272e8f3f22c00ada731cdcb3,PodSandboxId:15b64b5856939f8bac45fb57619eff6fafc932dc05ba97fe1943560b384bd630,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763768898866225812,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vl5f9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09a904d5-755f-4f1f-9525-b10e4e4b57a7,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409,PodSandboxId:f1662e37013476f1ac5ede7d21406a137d8e8672c36dcf49068d30172dc639f2,Metadata:&Container
Metadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763768860250918214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2855a3de-b990-447c-b094-274b5becf1da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d414f30f9b272529b258344187ca2317cbc5a4141f4ffe1bc6fa0f7df80bd5bb,PodSandboxId:79f2d64c3813a612bcd08d986b458be1fc1d2f0a3922d4db70b605b18db55f18,Metadata:&ContainerMetadata{Nam
e:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763768857987378401,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pd4sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fffae7-a3c2-46ef-a382-867c1f45dd2f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198,PodSandboxId:9607023c4fe8e371d977e1e4a2b52b0e80675b763c0cd2b3ae209db0
0b96f2cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763768852094801650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tgk67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad56ae13-a7c4-44e3-a817-73aa300110b6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40,PodSandboxId:1ce41f042f494eef5d0be46b1db7d599bafdcc97fef69e0d7f1782bd275c54ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763768851042427654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6jsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9f1dbf-19b7-4f19-8f33-11b6886f1237,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d,PodSandboxId:a6e11d2b9834f78610e1487003d17adb09fe565622e235df1217303546294639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763768839147876729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9c952f476472031bed61db83e3c978,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e,PodSandboxId:212b2600cae8f28bb69999f005d0b52383e46a97173ab430efc504c81b7dd8ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763768839130699997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca8736316ec035d06c4ec08eb70b85a,},Annotations:map[string]string{io.kubernetes.container.hash: e
9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7,PodSandboxId:7fb7e928bee471d4ec16ce9fb677f233172c530080bee3cf5baf5b1c28826973,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763768839107116372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: e0a4452486189aa3ae5b593dc3a43cac,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a216f1821ac9afeecc64f730374536676eea6c11ca4c8f9bde7913e3b74b219,PodSandboxId:43d68a4f9086a85d2e8c987fa17f958b95725f1d9f37b716010e99d58676be1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763768839078348144,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e0cb0275164bea778c164d97826c53,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e5094d5-8819-454c-9fff-435680e9c202 name=/runtime.v1.RuntimeService/ListContainers
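The crio entries above are the server side of plain CRI ListContainers polls (most likely the kubelet's periodic sweep). A minimal Go sketch of the client side of such a call, assuming the default CRI-O socket path and the k8s.io/cri-api v1 bindings (neither is stated in the log); the empty filter corresponds to the "No filters were applied, returning full container list" debug line:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O serves the CRI over a local unix socket; this is the usual default path.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Empty filter -> full container list, exactly what the debug response above shows.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}

The same listing, in tabular form, is what "crictl ps -a" prints on the node.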
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                       NAMESPACE
	991f92b0bd577       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              6 minutes ago       Running             nginx                                    0                   f7f9ecdee49d2       nginx                                     default
	1205f66bfddc4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   7a5080c12c12a       busybox                                   default
	51813a3108d9e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          7 minutes ago       Running             csi-snapshotter                          0                   d6163d79acc66       csi-hostpathplugin-gvwq9                  kube-system
	491a8ff7c586a       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   d6163d79acc66       csi-hostpathplugin-gvwq9                  kube-system
	62345e24511ba       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   d6163d79acc66       csi-hostpathplugin-gvwq9                  kube-system
	4c36592147c99       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   d6163d79acc66       csi-hostpathplugin-gvwq9                  kube-system
	552ab85d759ae       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   d6163d79acc66       csi-hostpathplugin-gvwq9                  kube-system
	b904d30a44673       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   ff134a61cd64e       csi-hostpath-attacher-0                   kube-system
	b4683ce225f87       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   d6163d79acc66       csi-hostpathplugin-gvwq9                  kube-system
	ea2e4d571c23b       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   5007bb0b80f02       csi-hostpath-resizer-0                    kube-system
	37dea366f964b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   1e73211f223b9       snapshot-controller-7d9fbc56b8-gcprx      kube-system
	16f748bb4b27c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   1c267215c3e5b       snapshot-controller-7d9fbc56b8-r57wx      kube-system
	fe7bb60492b04       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             7 minutes ago       Running             local-path-provisioner                   0                   15b64b5856939       local-path-provisioner-648f6765c9-vl5f9   local-path-storage
	62fac18e2a4ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   f1662e3701347       storage-provisioner                       kube-system
	d414f30f9b272       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     8 minutes ago       Running             amd-gpu-device-plugin                    0                   79f2d64c3813a       amd-gpu-device-plugin-pd4sx               kube-system
	e880e3438bfbb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             8 minutes ago       Running             coredns                                  0                   9607023c4fe8e       coredns-66bc5c9577-tgk67                  kube-system
	9ba59e7c8953d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             8 minutes ago       Running             kube-proxy                               0                   1ce41f042f494       kube-proxy-d6jsf                          kube-system
	8d89e7dd43a03       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             8 minutes ago       Running             kube-scheduler                           0                   a6e11d2b9834f       kube-scheduler-addons-266876              kube-system
	5c5891e44197c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             8 minutes ago       Running             etcd                                     0                   212b2600cae8f       etcd-addons-266876                        kube-system
	9b2349c8754b0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             8 minutes ago       Running             kube-apiserver                           0                   7fb7e928bee47       kube-apiserver-addons-266876              kube-system
	3a216f1821ac9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             8 minutes ago       Running             kube-controller-manager                  0                   43d68a4f9086a       kube-controller-manager-addons-266876     kube-system
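The CONTAINER/POD columns in the table above are derived from the io.kubernetes.pod.name and io.kubernetes.pod.namespace labels visible in the raw ListContainers output earlier. Reusing the client and imports from the sketch above, a filtered variant that returns only one pod's containers (the pod name is taken from the table; the helper name is ours):

// listPodContainers narrows the listing to a single pod by labels.
// CRI-O ANDs the LabelSelector entries, so only containers carrying
// both labels come back.
func listPodContainers(ctx context.Context, client runtimeapi.RuntimeServiceClient,
	pod, ns string) ([]*runtimeapi.Container, error) {
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			LabelSelector: map[string]string{
				"io.kubernetes.pod.name":      pod,
				"io.kubernetes.pod.namespace": ns,
			},
		},
	})
	if err != nil {
		return nil, err
	}
	return resp.Containers, nil
}

// e.g. listPodContainers(ctx, client, "csi-hostpathplugin-gvwq9", "kube-system")
// should return the six csi-hostpathplugin containers shown in the table.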
	
	
	==> coredns [e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198] <==
	[INFO] 10.244.0.22:50347 - 35168 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000108631s
	[INFO] 10.244.0.22:50347 - 54877 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000216081s
	[INFO] 10.244.0.22:50347 - 43294 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000179721s
	[INFO] 10.244.0.22:50347 - 27330 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000599569s
	[INFO] 10.244.0.22:60336 - 12888 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00056772s
	[INFO] 10.244.0.22:60336 - 36795 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000091403s
	[INFO] 10.244.0.22:60336 - 25266 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000099994s
	[INFO] 10.244.0.22:60336 - 33320 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000197786s
	[INFO] 10.244.0.22:60336 - 45387 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000148602s
	[INFO] 10.244.0.22:60336 - 51395 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000280512s
	[INFO] 10.244.0.22:60336 - 17580 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00009954s
	[INFO] 10.244.0.22:39320 - 9584 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000406115s
	[INFO] 10.244.0.22:53756 - 38444 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000126906s
	[INFO] 10.244.0.22:39320 - 47717 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000763646s
	[INFO] 10.244.0.22:39320 - 18735 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000161717s
	[INFO] 10.244.0.22:39320 - 58264 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000251885s
	[INFO] 10.244.0.22:39320 - 54900 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000138827s
	[INFO] 10.244.0.22:39320 - 7817 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000170117s
	[INFO] 10.244.0.22:53756 - 5097 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000284667s
	[INFO] 10.244.0.22:39320 - 60449 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000242395s
	[INFO] 10.244.0.22:53756 - 6963 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000095016s
	[INFO] 10.244.0.22:53756 - 9121 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000092889s
	[INFO] 10.244.0.22:53756 - 50282 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000104129s
	[INFO] 10.244.0.22:53756 - 63714 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000089905s
	[INFO] 10.244.0.22:53756 - 29550 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000102334s
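The NXDOMAIN/NOERROR pattern above is ordinary resolver search-path expansion rather than a lookup failure: the client resolves a name with fewer dots than the pod's ndots:5 setting, so each search suffix is tried (and refused) before the bare name finally answers NOERROR. A self-contained sketch of that expansion, assuming the querying pod lives in the ingress-nginx namespace (inferred from the first suffix in the log):

package main

import (
	"fmt"
	"strings"
)

// expand models only the branch seen in the log: when the name has fewer
// than ndots dots, every search suffix is tried before the name itself.
func expand(name string, ndots int, search []string) []string {
	var queries []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			queries = append(queries, name+"."+s)
		}
	}
	return append(queries, name)
}

func main() {
	search := []string{
		"ingress-nginx.svc.cluster.local", // querying pod's namespace (assumed)
		"svc.cluster.local",
		"cluster.local",
	}
	for _, q := range expand("hello-world-app.default.svc.cluster.local", 5, search) {
		fmt.Println(q)
	}
}

This prints the three NXDOMAIN names followed by the NOERROR name, in the same order as each query group in the coredns log above.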
	
	
	==> describe nodes <==
	Name:               addons-266876
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-266876
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=addons-266876
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T23_47_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-266876
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-266876"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 23:47:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-266876
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 23:55:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 23:52:41 +0000   Fri, 21 Nov 2025 23:47:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 23:52:41 +0000   Fri, 21 Nov 2025 23:47:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 23:52:41 +0000   Fri, 21 Nov 2025 23:47:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 23:52:41 +0000   Fri, 21 Nov 2025 23:47:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    addons-266876
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4a95d5c27154bec8bc2a50909bf4217
	  System UUID:                c4a95d5c-2715-4bec-8bc2-a50909bf4217
	  Boot ID:                    7afcec11-c11b-4436-b252-c2dac139e51f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  default                     hello-world-app-5d498dc89-sqvxb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  default                     task-pv-pod                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 amd-gpu-device-plugin-pd4sx                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 coredns-66bc5c9577-tgk67                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m12s
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 csi-hostpathplugin-gvwq9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 etcd-addons-266876                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m18s
	  kube-system                 kube-apiserver-addons-266876               250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-controller-manager-addons-266876      200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-proxy-d6jsf                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-scheduler-addons-266876               100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 snapshot-controller-7d9fbc56b8-gcprx       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-r57wx       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  local-path-storage          local-path-provisioner-648f6765c9-vl5f9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m10s                  kube-proxy       
	  Normal  Starting                 8m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m24s (x8 over 8m24s)  kubelet          Node addons-266876 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m24s (x8 over 8m24s)  kubelet          Node addons-266876 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m24s (x7 over 8m24s)  kubelet          Node addons-266876 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m17s                  kubelet          Node addons-266876 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m17s                  kubelet          Node addons-266876 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m17s                  kubelet          Node addons-266876 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m16s                  kubelet          Node addons-266876 status is now: NodeReady
	  Normal  RegisteredNode           8m13s                  node-controller  Node addons-266876 event: Registered Node addons-266876 in Controller
	
	
	==> dmesg <==
	[  +1.386553] kauditd_printk_skb: 314 callbacks suppressed
	[  +3.245635] kauditd_printk_skb: 404 callbacks suppressed
	[  +8.078733] kauditd_printk_skb: 5 callbacks suppressed
	[Nov21 23:48] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.490595] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.260482] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.041216] kauditd_printk_skb: 113 callbacks suppressed
	[  +5.004515] kauditd_printk_skb: 80 callbacks suppressed
	[  +3.836804] kauditd_printk_skb: 136 callbacks suppressed
	[  +4.200452] kauditd_printk_skb: 82 callbacks suppressed
	[  +0.000029] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000031] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.254098] kauditd_printk_skb: 53 callbacks suppressed
	[Nov21 23:49] kauditd_printk_skb: 47 callbacks suppressed
	[  +9.475817] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.686428] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.598673] kauditd_printk_skb: 95 callbacks suppressed
	[  +1.253211] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.652321] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.000031] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.880165] kauditd_printk_skb: 114 callbacks suppressed
	[Nov21 23:51] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.811687] kauditd_printk_skb: 51 callbacks suppressed
	[  +3.142587] kauditd_printk_skb: 10 callbacks suppressed
	[Nov21 23:52] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e] <==
	{"level":"info","ts":"2025-11-21T23:47:56.169034Z","caller":"traceutil/trace.go:172","msg":"trace[503038924] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:933; }","duration":"252.581534ms","start":"2025-11-21T23:47:55.916448Z","end":"2025-11-21T23:47:56.169029Z","steps":["trace[503038924] 'agreement among raft nodes before linearized reading'  (duration: 252.55235ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-21T23:47:59.352083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:59.363648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:59.513514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:59.589561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57252","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T23:48:08.513782Z","caller":"traceutil/trace.go:172","msg":"trace[715400112] transaction","detail":"{read_only:false; response_revision:978; number_of_response:1; }","duration":"116.418162ms","start":"2025-11-21T23:48:08.397351Z","end":"2025-11-21T23:48:08.513770Z","steps":["trace[715400112] 'process raft request'  (duration: 116.119443ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:10.824125Z","caller":"traceutil/trace.go:172","msg":"trace[2036679806] linearizableReadLoop","detail":"{readStateIndex:1014; appliedIndex:1014; }","duration":"203.849321ms","start":"2025-11-21T23:48:10.620261Z","end":"2025-11-21T23:48:10.824110Z","steps":["trace[2036679806] 'read index received'  (duration: 203.843953ms)","trace[2036679806] 'applied index is now lower than readState.Index'  (duration: 4.512µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T23:48:10.824235Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.952821ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T23:48:10.824255Z","caller":"traceutil/trace.go:172","msg":"trace[1038609178] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:986; }","duration":"203.992763ms","start":"2025-11-21T23:48:10.620257Z","end":"2025-11-21T23:48:10.824249Z","steps":["trace[1038609178] 'agreement among raft nodes before linearized reading'  (duration: 203.924903ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:10.827067Z","caller":"traceutil/trace.go:172","msg":"trace[958942931] transaction","detail":"{read_only:false; response_revision:987; number_of_response:1; }","duration":"216.790232ms","start":"2025-11-21T23:48:10.610267Z","end":"2025-11-21T23:48:10.827057Z","steps":["trace[958942931] 'process raft request'  (duration: 213.950708ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:17.235529Z","caller":"traceutil/trace.go:172","msg":"trace[2072959660] linearizableReadLoop","detail":"{readStateIndex:1040; appliedIndex:1040; }","duration":"118.859084ms","start":"2025-11-21T23:48:17.116651Z","end":"2025-11-21T23:48:17.235510Z","steps":["trace[2072959660] 'read index received'  (duration: 118.853824ms)","trace[2072959660] 'applied index is now lower than readState.Index'  (duration: 4.479µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T23:48:17.235633Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.964818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T23:48:17.235650Z","caller":"traceutil/trace.go:172","msg":"trace[1291312129] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1011; }","duration":"118.997232ms","start":"2025-11-21T23:48:17.116647Z","end":"2025-11-21T23:48:17.235645Z","steps":["trace[1291312129] 'agreement among raft nodes before linearized reading'  (duration: 118.929178ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:17.236014Z","caller":"traceutil/trace.go:172","msg":"trace[409496112] transaction","detail":"{read_only:false; response_revision:1012; number_of_response:1; }","duration":"245.19274ms","start":"2025-11-21T23:48:16.990813Z","end":"2025-11-21T23:48:17.236006Z","steps":["trace[409496112] 'process raft request'  (duration: 245.052969ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:20.410362Z","caller":"traceutil/trace.go:172","msg":"trace[828505748] transaction","detail":"{read_only:false; response_revision:1027; number_of_response:1; }","duration":"157.893848ms","start":"2025-11-21T23:48:20.252456Z","end":"2025-11-21T23:48:20.410350Z","steps":["trace[828505748] 'process raft request'  (duration: 157.749487ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:26.972869Z","caller":"traceutil/trace.go:172","msg":"trace[583749754] transaction","detail":"{read_only:false; response_revision:1085; number_of_response:1; }","duration":"180.54926ms","start":"2025-11-21T23:48:26.792295Z","end":"2025-11-21T23:48:26.972845Z","steps":["trace[583749754] 'process raft request'  (duration: 180.444491ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:55.718332Z","caller":"traceutil/trace.go:172","msg":"trace[218102785] linearizableReadLoop","detail":"{readStateIndex:1235; appliedIndex:1235; }","duration":"102.447461ms","start":"2025-11-21T23:48:55.615863Z","end":"2025-11-21T23:48:55.718310Z","steps":["trace[218102785] 'read index received'  (duration: 102.442519ms)","trace[218102785] 'applied index is now lower than readState.Index'  (duration: 4.145µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T23:48:55.718517Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.662851ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T23:48:55.718556Z","caller":"traceutil/trace.go:172","msg":"trace[280205783] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1197; }","duration":"102.741104ms","start":"2025-11-21T23:48:55.615807Z","end":"2025-11-21T23:48:55.718548Z","steps":["trace[280205783] 'agreement among raft nodes before linearized reading'  (duration: 102.634025ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:55.718853Z","caller":"traceutil/trace.go:172","msg":"trace[1563407473] transaction","detail":"{read_only:false; response_revision:1198; number_of_response:1; }","duration":"160.082369ms","start":"2025-11-21T23:48:55.558762Z","end":"2025-11-21T23:48:55.718844Z","steps":["trace[1563407473] 'process raft request'  (duration: 160.006081ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:49:25.230279Z","caller":"traceutil/trace.go:172","msg":"trace[1746671191] transaction","detail":"{read_only:false; response_revision:1422; number_of_response:1; }","duration":"130.337483ms","start":"2025-11-21T23:49:25.099914Z","end":"2025-11-21T23:49:25.230251Z","steps":["trace[1746671191] 'process raft request'  (duration: 128.456166ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:49:31.443123Z","caller":"traceutil/trace.go:172","msg":"trace[1229097043] linearizableReadLoop","detail":"{readStateIndex:1512; appliedIndex:1512; }","duration":"121.2839ms","start":"2025-11-21T23:49:31.321821Z","end":"2025-11-21T23:49:31.443104Z","steps":["trace[1229097043] 'read index received'  (duration: 121.277728ms)","trace[1229097043] 'applied index is now lower than readState.Index'  (duration: 4.966µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T23:49:31.443287Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.446592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T23:49:31.443311Z","caller":"traceutil/trace.go:172","msg":"trace[1460275697] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1465; }","duration":"121.507541ms","start":"2025-11-21T23:49:31.321797Z","end":"2025-11-21T23:49:31.443305Z","steps":["trace[1460275697] 'agreement among raft nodes before linearized reading'  (duration: 121.416565ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:49:31.444122Z","caller":"traceutil/trace.go:172","msg":"trace[1873839518] transaction","detail":"{read_only:false; response_revision:1466; number_of_response:1; }","duration":"152.736081ms","start":"2025-11-21T23:49:31.291375Z","end":"2025-11-21T23:49:31.444111Z","steps":["trace[1873839518] 'process raft request'  (duration: 152.523387ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:55:42 up 8 min,  0 users,  load average: 0.15, 0.69, 0.58
	Linux addons-266876 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7] <==
	W1121 23:47:42.366369       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1121 23:47:42.410842       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1121 23:47:42.649275       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.103.205.35"}
	I1121 23:47:44.226888       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.96.203.27"}
	W1121 23:47:59.343614       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1121 23:47:59.366318       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1121 23:47:59.513667       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1121 23:47:59.564438       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1121 23:48:11.667772       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:48:11.669231       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.6.129:443: connect: connection refused" logger="UnhandledError"
	E1121 23:48:11.670277       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1121 23:48:11.672393       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.6.129:443: connect: connection refused" logger="UnhandledError"
	E1121 23:48:11.677441       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.6.129:443: connect: connection refused" logger="UnhandledError"
	E1121 23:48:11.699611       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.6.129:443: connect: connection refused" logger="UnhandledError"
	I1121 23:48:11.830392       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1121 23:49:07.600969       1 conn.go:339] Error on socket receive: read tcp 192.168.39.50:8443->192.168.39.1:39632: use of closed network connection
	E1121 23:49:07.806030       1 conn.go:339] Error on socket receive: read tcp 192.168.39.50:8443->192.168.39.1:39646: use of closed network connection
	I1121 23:49:16.529402       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1121 23:49:16.732737       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.151.240"}
	I1121 23:49:17.182251       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.21.116"}
	I1121 23:50:12.699613       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1121 23:51:44.509660       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.217.27"}
	
	
	==> kube-controller-manager [3a216f1821ac9afeecc64f730374536676eea6c11ca4c8f9bde7913e3b74b219] <==
	I1121 23:47:29.337520       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 23:47:29.337577       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 23:47:29.337666       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 23:47:29.338254       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 23:47:29.338833       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 23:47:29.339107       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 23:47:29.340477       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 23:47:29.340506       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 23:47:29.341040       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 23:47:29.343803       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 23:47:29.357152       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 23:47:29.371508       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1121 23:47:37.577649       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1121 23:47:59.325689       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1121 23:47:59.326487       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1121 23:47:59.326701       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1121 23:47:59.433161       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1121 23:47:59.436132       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 23:47:59.460324       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1121 23:47:59.669783       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 23:49:21.118711       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1121 23:49:39.996446       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I1121 23:49:43.075346       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I1121 23:49:50.779389       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I1121 23:51:58.890395       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	
	
	==> kube-proxy [9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40] <==
	I1121 23:47:31.549237       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 23:47:31.651147       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 23:47:31.651198       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.50"]
	E1121 23:47:31.651275       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 23:47:31.974605       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1121 23:47:31.975156       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1121 23:47:31.975763       1 server_linux.go:132] "Using iptables Proxier"
	I1121 23:47:32.024377       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 23:47:32.026629       1 server.go:527] "Version info" version="v1.34.1"
	I1121 23:47:32.026711       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 23:47:32.034053       1 config.go:200] "Starting service config controller"
	I1121 23:47:32.034241       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 23:47:32.034262       1 config.go:106] "Starting endpoint slice config controller"
	I1121 23:47:32.034266       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 23:47:32.034276       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 23:47:32.034279       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 23:47:32.039494       1 config.go:309] "Starting node config controller"
	I1121 23:47:32.039506       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 23:47:32.039512       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 23:47:32.134526       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 23:47:32.134549       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 23:47:32.134580       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d] <==
	E1121 23:47:22.475530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 23:47:22.475591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 23:47:22.475644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 23:47:22.475674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 23:47:22.475781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 23:47:22.475833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 23:47:22.475877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 23:47:22.476028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 23:47:22.476096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 23:47:23.318227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 23:47:23.496497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 23:47:23.525267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 23:47:23.575530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 23:47:23.578656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 23:47:23.593013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 23:47:23.593144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 23:47:23.685009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 23:47:23.695610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 23:47:23.719024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 23:47:23.735984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 23:47:23.781311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 23:47:23.797047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 23:47:23.818758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1121 23:47:23.836424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1121 23:47:26.255559       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 23:55:05 addons-266876 kubelet[1502]: E1121 23:55:05.929114    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769305928516510  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:55:05 addons-266876 kubelet[1502]: E1121 23:55:05.929441    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769305928516510  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:55:06 addons-266876 kubelet[1502]: E1121 23:55:06.404235    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-5d498dc89-sqvxb" podUID="06b9a800-a9fc-4174-8e6f-34e5c7b7563b"
	Nov 21 23:55:10 addons-266876 kubelet[1502]: I1121 23:55:10.400493    1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:55:14 addons-266876 kubelet[1502]: I1121 23:55:14.400613    1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-pd4sx" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:55:15 addons-266876 kubelet[1502]: E1121 23:55:15.932753    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769315932195652  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:55:15 addons-266876 kubelet[1502]: E1121 23:55:15.933263    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769315932195652  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:55:24 addons-266876 kubelet[1502]: E1121 23:55:24.580009    1502 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Nov 21 23:55:24 addons-266876 kubelet[1502]: E1121 23:55:24.580089    1502 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Nov 21 23:55:24 addons-266876 kubelet[1502]: E1121 23:55:24.580296    1502 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-b9a0f343-b333-4fad-87d0-620f3a86218a_local-path-storage(5b56ac87-ee47-4db4-9910-2c199e439aec): ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 21 23:55:24 addons-266876 kubelet[1502]: E1121 23:55:24.580336    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-b9a0f343-b333-4fad-87d0-620f3a86218a" podUID="5b56ac87-ee47-4db4-9910-2c199e439aec"
	Nov 21 23:55:25 addons-266876 kubelet[1502]: I1121 23:55:25.602244    1502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/5b56ac87-ee47-4db4-9910-2c199e439aec-data\") pod \"5b56ac87-ee47-4db4-9910-2c199e439aec\" (UID: \"5b56ac87-ee47-4db4-9910-2c199e439aec\") "
	Nov 21 23:55:25 addons-266876 kubelet[1502]: I1121 23:55:25.602317    1502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf6r5\" (UniqueName: \"kubernetes.io/projected/5b56ac87-ee47-4db4-9910-2c199e439aec-kube-api-access-wf6r5\") pod \"5b56ac87-ee47-4db4-9910-2c199e439aec\" (UID: \"5b56ac87-ee47-4db4-9910-2c199e439aec\") "
	Nov 21 23:55:25 addons-266876 kubelet[1502]: I1121 23:55:25.602344    1502 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/5b56ac87-ee47-4db4-9910-2c199e439aec-script\") pod \"5b56ac87-ee47-4db4-9910-2c199e439aec\" (UID: \"5b56ac87-ee47-4db4-9910-2c199e439aec\") "
	Nov 21 23:55:25 addons-266876 kubelet[1502]: I1121 23:55:25.602352    1502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b56ac87-ee47-4db4-9910-2c199e439aec-data" (OuterVolumeSpecName: "data") pod "5b56ac87-ee47-4db4-9910-2c199e439aec" (UID: "5b56ac87-ee47-4db4-9910-2c199e439aec"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 21 23:55:25 addons-266876 kubelet[1502]: I1121 23:55:25.602448    1502 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/5b56ac87-ee47-4db4-9910-2c199e439aec-data\") on node \"addons-266876\" DevicePath \"\""
	Nov 21 23:55:25 addons-266876 kubelet[1502]: I1121 23:55:25.602740    1502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b56ac87-ee47-4db4-9910-2c199e439aec-script" (OuterVolumeSpecName: "script") pod "5b56ac87-ee47-4db4-9910-2c199e439aec" (UID: "5b56ac87-ee47-4db4-9910-2c199e439aec"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 21 23:55:25 addons-266876 kubelet[1502]: I1121 23:55:25.604842    1502 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b56ac87-ee47-4db4-9910-2c199e439aec-kube-api-access-wf6r5" (OuterVolumeSpecName: "kube-api-access-wf6r5") pod "5b56ac87-ee47-4db4-9910-2c199e439aec" (UID: "5b56ac87-ee47-4db4-9910-2c199e439aec"). InnerVolumeSpecName "kube-api-access-wf6r5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 21 23:55:25 addons-266876 kubelet[1502]: I1121 23:55:25.703520    1502 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wf6r5\" (UniqueName: \"kubernetes.io/projected/5b56ac87-ee47-4db4-9910-2c199e439aec-kube-api-access-wf6r5\") on node \"addons-266876\" DevicePath \"\""
	Nov 21 23:55:25 addons-266876 kubelet[1502]: I1121 23:55:25.703559    1502 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/5b56ac87-ee47-4db4-9910-2c199e439aec-script\") on node \"addons-266876\" DevicePath \"\""
	Nov 21 23:55:25 addons-266876 kubelet[1502]: E1121 23:55:25.935426    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769325935091611  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:55:25 addons-266876 kubelet[1502]: E1121 23:55:25.935470    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769325935091611  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:55:27 addons-266876 kubelet[1502]: I1121 23:55:27.405371    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b56ac87-ee47-4db4-9910-2c199e439aec" path="/var/lib/kubelet/pods/5b56ac87-ee47-4db4-9910-2c199e439aec/volumes"
	Nov 21 23:55:35 addons-266876 kubelet[1502]: E1121 23:55:35.938246    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769335937422350  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:55:35 addons-266876 kubelet[1502]: E1121 23:55:35.938268    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769335937422350  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	
	
	==> storage-provisioner [62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409] <==
	W1121 23:55:18.109356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:20.112873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:20.118832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:22.123133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:22.131478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:24.134763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:24.141622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:26.146193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:26.151531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:28.155231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:28.161710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:30.165187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:30.170164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:32.175367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:32.184054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:34.188102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:34.193494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:36.199873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:36.208330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:38.212196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:38.221148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:40.224349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:40.231780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:42.235521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:55:42.246352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-266876 -n addons-266876
helpers_test.go:269: (dbg) Run:  kubectl --context addons-266876 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-sqvxb task-pv-pod test-local-path
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-266876 describe pod hello-world-app-5d498dc89-sqvxb task-pv-pod test-local-path
helpers_test.go:290: (dbg) kubectl --context addons-266876 describe pod hello-world-app-5d498dc89-sqvxb task-pv-pod test-local-path:

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-sqvxb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-266876/192.168.39.50
	Start Time:       Fri, 21 Nov 2025 23:51:44 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:           10.244.0.30
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dhdwl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dhdwl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m59s                default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-sqvxb to addons-266876
	  Warning  Failed     49s (x2 over 2m49s)  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     49s (x2 over 2m49s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    37s (x2 over 2m49s)  kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     37s (x2 over 2m49s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    26s (x3 over 3m59s)  kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-266876/192.168.39.50
	Start Time:       Fri, 21 Nov 2025 23:49:41 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cj5dd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-cj5dd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-266876
	  Warning  Failed     4m34s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     109s (x3 over 4m34s)  kubelet            Error: ErrImagePull
	  Warning  Failed     109s (x2 over 3m34s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    82s (x4 over 4m34s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     82s (x4 over 4m34s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    67s (x4 over 6m2s)    kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-24fvr (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-24fvr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266876 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.859904028s)
--- FAIL: TestAddons/parallel/CSI (372.49s)
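Note on the failure mode above: both hello-world-app and task-pv-pod are stuck in ImagePullBackOff because docker.io answered with toomanyrequests (the unauthenticated Docker Hub pull rate limit), not because of a CSI defect. A minimal mitigation sketch, assuming the profile name and image references shown in this report and using standard minikube commands, would be to side-load the images so the node's crio runtime never has to pull them from Docker Hub:

# sketch only, not part of the recorded test run:
# pre-load the rate-limited images into the addons-266876 profile
minikube -p addons-266876 image load docker.io/kicbase/echo-server:1.0
minikube -p addons-266876 image load docker.io/nginx
# alternatively, attach registry credentials via imagePullSecrets on the pod spec
# so pulls count against an authenticated (higher) rate limit

Either approach keeps the test images available locally when Docker Hub throttles anonymous pulls.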

                                                
                                    
x
+
TestAddons/parallel/LocalPath (302.53s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-266876 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-266876 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266876 get pvc test-pvc -o jsonpath={.status.phase} -n default
[... this identical poll appears 300 times in the original log while the test waits for test-pvc to leave Pending ...]
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
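The wait above timed out because test-pvc never left Pending within the 5m0s window. A hedged diagnostic sketch using standard kubectl commands (the local-path-storage namespace and the local-path-provisioner workload name are assumptions about where minikube's storage-provisioner-rancher addon usually runs, not something shown in this log):

# sketch only: inspect why test-pvc stayed Pending
kubectl --context addons-266876 describe pvc test-pvc -n default
kubectl --context addons-266876 get events -n default --sort-by=.lastTimestamp
# assumed provisioner location; adjust if the addon deploys it elsewhere
kubectl --context addons-266876 get pods -n local-path-storage
kubectl --context addons-266876 logs -n local-path-storage deploy/local-path-provisioner

Given that the CSI and Ingress tests in this run already hit the Docker Hub rate limit, an image pull failure on the provisioner side is a plausible, though unconfirmed, reason the claim never bound.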
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-266876 -n addons-266876
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-266876 logs -n 25: (1.220024084s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-263491                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-263491 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ delete  │ -p download-only-246895                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-246895 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ delete  │ -p download-only-263491                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-263491 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ start   │ --download-only -p binary-mirror-996598 --alsologtostderr --binary-mirror http://127.0.0.1:41123 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-996598 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ delete  │ -p binary-mirror-996598                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-996598 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ addons  │ enable dashboard -p addons-266876                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ addons  │ disable dashboard -p addons-266876                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ start   │ -p addons-266876 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:48 UTC │
	│ addons  │ addons-266876 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │ 21 Nov 25 23:48 UTC │
	│ addons  │ addons-266876 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ enable headlamp -p addons-266876 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-266876                                                                                                                                                                                                                                                                                                                                                                                         │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ ssh     │ addons-266876 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │                     │
	│ addons  │ addons-266876 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ ip      │ addons-266876 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ addons  │ addons-266876 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │ 21 Nov 25 23:49 UTC │
	│ ip      │ addons-266876 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │ 21 Nov 25 23:51 UTC │
	│ addons  │ addons-266876 addons disable ingress-dns --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │ 21 Nov 25 23:51 UTC │
	│ addons  │ addons-266876 addons disable ingress --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-266876        │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │ 21 Nov 25 23:51 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:46:48
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:46:48.131095  251263 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:46:48.131340  251263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:48.131350  251263 out.go:374] Setting ErrFile to fd 2...
	I1121 23:46:48.131354  251263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:48.131528  251263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1121 23:46:48.132085  251263 out.go:368] Setting JSON to false
	I1121 23:46:48.132905  251263 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":26936,"bootTime":1763741872,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 23:46:48.132973  251263 start.go:143] virtualization: kvm guest
	I1121 23:46:48.134971  251263 out.go:179] * [addons-266876] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 23:46:48.136184  251263 notify.go:221] Checking for updates...
	I1121 23:46:48.136230  251263 out.go:179]   - MINIKUBE_LOCATION=21934
	I1121 23:46:48.137505  251263 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:46:48.138918  251263 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1121 23:46:48.140232  251263 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	I1121 23:46:48.141364  251263 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 23:46:48.142744  251263 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 23:46:48.144346  251263 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:46:48.178112  251263 out.go:179] * Using the kvm2 driver based on user configuration
	I1121 23:46:48.179144  251263 start.go:309] selected driver: kvm2
	I1121 23:46:48.179156  251263 start.go:930] validating driver "kvm2" against <nil>
	I1121 23:46:48.179168  251263 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 23:46:48.179919  251263 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 23:46:48.180166  251263 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 23:46:48.180191  251263 cni.go:84] Creating CNI manager for ""
	I1121 23:46:48.180267  251263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1121 23:46:48.180276  251263 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1121 23:46:48.180323  251263 start.go:353] cluster config:
	{Name:addons-266876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1121 23:46:48.180438  251263 iso.go:125] acquiring lock: {Name:mkc83d3435f1eaa5a92358fc78f85b7d74048deb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 23:46:48.181860  251263 out.go:179] * Starting "addons-266876" primary control-plane node in "addons-266876" cluster
	I1121 23:46:48.182929  251263 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:46:48.182959  251263 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 23:46:48.182976  251263 cache.go:65] Caching tarball of preloaded images
	I1121 23:46:48.183059  251263 preload.go:238] Found /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 23:46:48.183069  251263 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 23:46:48.183354  251263 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/config.json ...
	I1121 23:46:48.183376  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/config.json: {Name:mk0295453cd01463fa22b5d6c7388981c204c24d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:48.183507  251263 start.go:360] acquireMachinesLock for addons-266876: {Name:mk0193f6f5636a08cc7939c91649c5870e7698fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1121 23:46:48.183552  251263 start.go:364] duration metric: took 33.297µs to acquireMachinesLock for "addons-266876"
	I1121 23:46:48.183570  251263 start.go:93] Provisioning new machine with config: &{Name:addons-266876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 23:46:48.183614  251263 start.go:125] createHost starting for "" (driver="kvm2")
	I1121 23:46:48.185254  251263 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1121 23:46:48.185412  251263 start.go:159] libmachine.API.Create for "addons-266876" (driver="kvm2")
	I1121 23:46:48.185441  251263 client.go:173] LocalClient.Create starting
	I1121 23:46:48.185543  251263 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem
	I1121 23:46:48.249364  251263 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem
	I1121 23:46:48.566610  251263 main.go:143] libmachine: creating domain...
	I1121 23:46:48.566636  251263 main.go:143] libmachine: creating network...
	I1121 23:46:48.568191  251263 main.go:143] libmachine: found existing default network
	I1121 23:46:48.568404  251263 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
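	The XML block above is libvirt's description of the pre-existing "default" NAT network that the driver finds before carving out its own subnet. A minimal hand-run sketch of the same lookup, assuming the qemu:///system connection used throughout this log:
	
	  # list every libvirt network on the system connection
	  virsh --connect qemu:///system net-list --all
	  # dump the XML definition of the default network (matches the block above)
	  virsh --connect qemu:///system net-dumpxml default
	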
	I1121 23:46:48.568892  251263 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e90440}
	I1121 23:46:48.569009  251263 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-266876</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1121 23:46:48.575044  251263 main.go:143] libmachine: creating private network mk-addons-266876 192.168.39.0/24...
	I1121 23:46:48.645727  251263 main.go:143] libmachine: private network mk-addons-266876 192.168.39.0/24 created
	I1121 23:46:48.646042  251263 main.go:143] libmachine: <network>
	  <name>mk-addons-266876</name>
	  <uuid>c503bc44-d3ea-47cf-b120-da4593d18380</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:80:0f:c2'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
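	The private network was created from the XML logged just above. A hedged, hand-run equivalent (the file path is illustrative, not taken from the log):
	
	  # define and start a persistent private libvirt network from an XML file
	  virsh --connect qemu:///system net-define /tmp/mk-addons-266876.xml
	  virsh --connect qemu:///system net-start mk-addons-266876
	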
	I1121 23:46:48.646078  251263 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876 ...
	I1121 23:46:48.646103  251263 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21934-244751/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1121 23:46:48.646114  251263 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21934-244751/.minikube
	I1121 23:46:48.646192  251263 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21934-244751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21934-244751/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1121 23:46:48.924945  251263 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa...
	I1121 23:46:48.947251  251263 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/addons-266876.rawdisk...
	I1121 23:46:48.947299  251263 main.go:143] libmachine: Writing magic tar header
	I1121 23:46:48.947321  251263 main.go:143] libmachine: Writing SSH key tar header
	I1121 23:46:48.947404  251263 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876 ...
	I1121 23:46:48.947463  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876
	I1121 23:46:48.947488  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876 (perms=drwx------)
	I1121 23:46:48.947500  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751/.minikube/machines
	I1121 23:46:48.947510  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751/.minikube/machines (perms=drwxr-xr-x)
	I1121 23:46:48.947521  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751/.minikube
	I1121 23:46:48.947528  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751/.minikube (perms=drwxr-xr-x)
	I1121 23:46:48.947540  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751
	I1121 23:46:48.947549  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751 (perms=drwxrwxr-x)
	I1121 23:46:48.947562  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1121 23:46:48.947572  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1121 23:46:48.947579  251263 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1121 23:46:48.947589  251263 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1121 23:46:48.947600  251263 main.go:143] libmachine: checking permissions on dir: /home
	I1121 23:46:48.947606  251263 main.go:143] libmachine: skipping /home - not owner
	I1121 23:46:48.947613  251263 main.go:143] libmachine: defining domain...
	I1121 23:46:48.949155  251263 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-266876</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/addons-266876.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-266876'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
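	The domain XML above attaches the boot2docker ISO as a CD-ROM, the raw disk as a virtio device, and one NIC on each of the two networks. Saved to a file, the same definition could be registered and booted by hand (file path illustrative):
	
	  # register the domain with libvirt, then boot it
	  virsh --connect qemu:///system define /tmp/addons-266876.xml
	  virsh --connect qemu:///system start addons-266876
	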
	I1121 23:46:48.954504  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:cb:01:39 in network default
	I1121 23:46:48.955203  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:48.955226  251263 main.go:143] libmachine: starting domain...
	I1121 23:46:48.955230  251263 main.go:143] libmachine: ensuring networks are active...
	I1121 23:46:48.956075  251263 main.go:143] libmachine: Ensuring network default is active
	I1121 23:46:48.956468  251263 main.go:143] libmachine: Ensuring network mk-addons-266876 is active
	I1121 23:46:48.957054  251263 main.go:143] libmachine: getting domain XML...
	I1121 23:46:48.958124  251263 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-266876</name>
	  <uuid>c4a95d5c-2715-4bec-8bc2-a50909bf4217</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/addons-266876.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:ab:5a:31'/>
	      <source network='mk-addons-266876'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:cb:01:39'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
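	Once the domain is running, the driver polls for its IP address, first from the network's DHCP leases (source=lease) and then from the ARP cache (source=arp), as the retry loop below shows. The same lookups can be reproduced with virsh, using the domain and network names from this log:
	
	  # DHCP leases handed out on the private network
	  virsh --connect qemu:///system net-dhcp-leases mk-addons-266876
	  # interface addresses for the domain, from the lease table or the ARP cache
	  virsh --connect qemu:///system domifaddr addons-266876 --source lease
	  virsh --connect qemu:///system domifaddr addons-266876 --source arp
	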
	I1121 23:46:50.230732  251263 main.go:143] libmachine: waiting for domain to start...
	I1121 23:46:50.232398  251263 main.go:143] libmachine: domain is now running
	I1121 23:46:50.232423  251263 main.go:143] libmachine: waiting for IP...
	I1121 23:46:50.233366  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:50.234245  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:50.234266  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:50.234594  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:50.234654  251263 retry.go:31] will retry after 291.794239ms: waiting for domain to come up
	I1121 23:46:50.528283  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:50.528971  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:50.528987  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:50.529342  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:50.529380  251263 retry.go:31] will retry after 351.305248ms: waiting for domain to come up
	I1121 23:46:50.882166  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:50.883099  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:50.883122  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:50.883485  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:50.883531  251263 retry.go:31] will retry after 364.129033ms: waiting for domain to come up
	I1121 23:46:51.249389  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:51.250192  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:51.250210  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:51.250511  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:51.250562  251263 retry.go:31] will retry after 385.747401ms: waiting for domain to come up
	I1121 23:46:51.638320  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:51.639301  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:51.639319  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:51.639704  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:51.639759  251263 retry.go:31] will retry after 745.315642ms: waiting for domain to come up
	I1121 23:46:52.386579  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:52.387430  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:52.387444  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:52.387845  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:52.387891  251263 retry.go:31] will retry after 692.465755ms: waiting for domain to come up
	I1121 23:46:53.081995  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:53.082882  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:53.082899  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:53.083254  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:53.083289  251263 retry.go:31] will retry after 879.261574ms: waiting for domain to come up
	I1121 23:46:53.964041  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:53.964752  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:53.964779  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:53.965086  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:53.965141  251263 retry.go:31] will retry after 1.461085566s: waiting for domain to come up
	I1121 23:46:55.428870  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:55.429589  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:55.429605  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:55.429939  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:55.429981  251263 retry.go:31] will retry after 1.78072773s: waiting for domain to come up
	I1121 23:46:57.213143  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:57.213941  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:57.213961  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:57.214320  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:57.214355  251263 retry.go:31] will retry after 1.504173315s: waiting for domain to come up
	I1121 23:46:58.719849  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:46:58.720746  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:46:58.720770  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:46:58.721137  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:46:58.721173  251263 retry.go:31] will retry after 2.875642747s: waiting for domain to come up
	I1121 23:47:01.600296  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:01.600945  251263 main.go:143] libmachine: no network interface addresses found for domain addons-266876 (source=lease)
	I1121 23:47:01.600961  251263 main.go:143] libmachine: trying to list again with source=arp
	I1121 23:47:01.601274  251263 main.go:143] libmachine: unable to find current IP address of domain addons-266876 in network mk-addons-266876 (interfaces detected: [])
	I1121 23:47:01.601321  251263 retry.go:31] will retry after 3.623260763s: waiting for domain to come up
	I1121 23:47:05.227711  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.228458  251263 main.go:143] libmachine: domain addons-266876 has current primary IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.228475  251263 main.go:143] libmachine: found domain IP: 192.168.39.50
	I1121 23:47:05.228486  251263 main.go:143] libmachine: reserving static IP address...
	I1121 23:47:05.229043  251263 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-266876", mac: "52:54:00:ab:5a:31", ip: "192.168.39.50"} in network mk-addons-266876
	I1121 23:47:05.530130  251263 main.go:143] libmachine: reserved static IP address 192.168.39.50 for domain addons-266876
	I1121 23:47:05.530160  251263 main.go:143] libmachine: waiting for SSH...
	I1121 23:47:05.530169  251263 main.go:143] libmachine: Getting to WaitForSSH function...
	I1121 23:47:05.533988  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.534529  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:05.534565  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.534795  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:05.535088  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:05.535104  251263 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1121 23:47:05.657550  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: 
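	The empty output above is the successful "exit 0" probe over SSH. A hand-run sketch of the same check, using the key path and user this log reports for the machine (host-key checking relaxed only because the VM was just created):
	
	  ssh -o StrictHostKeyChecking=no \
	      -i /home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa \
	      docker@192.168.39.50 'exit 0' && echo "ssh is up"
	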
	I1121 23:47:05.657963  251263 main.go:143] libmachine: domain creation complete
	I1121 23:47:05.659772  251263 machine.go:94] provisionDockerMachine start ...
	I1121 23:47:05.662740  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.663237  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:05.663263  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.663525  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:05.663805  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:05.663820  251263 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 23:47:05.773778  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1121 23:47:05.773809  251263 buildroot.go:166] provisioning hostname "addons-266876"
	I1121 23:47:05.777397  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.777855  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:05.777881  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.778090  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:05.778347  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:05.778362  251263 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-266876 && echo "addons-266876" | sudo tee /etc/hostname
	I1121 23:47:05.904549  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-266876
	
	I1121 23:47:05.907947  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.908399  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:05.908428  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:05.908637  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:05.908909  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:05.908934  251263 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-266876' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-266876/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-266876' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 23:47:06.027505  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 23:47:06.027542  251263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21934-244751/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-244751/.minikube}
	I1121 23:47:06.027606  251263 buildroot.go:174] setting up certificates
	I1121 23:47:06.027620  251263 provision.go:84] configureAuth start
	I1121 23:47:06.030823  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.031234  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.031255  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.033405  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.033742  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.033761  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.033873  251263 provision.go:143] copyHostCerts
	I1121 23:47:06.033958  251263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/key.pem (1679 bytes)
	I1121 23:47:06.034087  251263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/ca.pem (1078 bytes)
	I1121 23:47:06.034147  251263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/cert.pem (1123 bytes)
	I1121 23:47:06.034206  251263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem org=jenkins.addons-266876 san=[127.0.0.1 192.168.39.50 addons-266876 localhost minikube]
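	The server certificate is generated with the SANs listed above (loopback, the VM IP, the machine name, localhost and minikube). After the copy step below it lands at /etc/docker/server.pem on the node, where the SAN list can be checked with openssl:
	
	  # inspect the generated server certificate and its subjectAltName entries
	  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	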
	I1121 23:47:06.088178  251263 provision.go:177] copyRemoteCerts
	I1121 23:47:06.088255  251263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 23:47:06.090836  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.091229  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.091259  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.091419  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:06.177697  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1121 23:47:06.208945  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 23:47:06.240002  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 23:47:06.271424  251263 provision.go:87] duration metric: took 243.786645ms to configureAuth
	I1121 23:47:06.271463  251263 buildroot.go:189] setting minikube options for container-runtime
	I1121 23:47:06.271718  251263 config.go:182] Loaded profile config "addons-266876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:47:06.275170  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.275691  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.275730  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.276021  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:06.276275  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:06.276292  251263 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 23:47:06.522993  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
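	The restart above applies a sysconfig drop-in that passes --insecure-registry for the service CIDR to CRI-O. A quick sanity check on the node would be:
	
	  # show the drop-in minikube wrote and confirm crio restarted cleanly
	  cat /etc/sysconfig/crio.minikube
	  systemctl status crio --no-pager
	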
	I1121 23:47:06.523024  251263 machine.go:97] duration metric: took 863.230308ms to provisionDockerMachine
	I1121 23:47:06.523034  251263 client.go:176] duration metric: took 18.337586387s to LocalClient.Create
	I1121 23:47:06.523056  251263 start.go:167] duration metric: took 18.337642424s to libmachine.API.Create "addons-266876"
	I1121 23:47:06.523067  251263 start.go:293] postStartSetup for "addons-266876" (driver="kvm2")
	I1121 23:47:06.523080  251263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 23:47:06.523174  251263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 23:47:06.526182  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.526662  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.526701  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.526857  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:06.616570  251263 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 23:47:06.622182  251263 info.go:137] Remote host: Buildroot 2025.02
	I1121 23:47:06.622217  251263 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-244751/.minikube/addons for local assets ...
	I1121 23:47:06.622288  251263 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-244751/.minikube/files for local assets ...
	I1121 23:47:06.622311  251263 start.go:296] duration metric: took 99.238343ms for postStartSetup
	I1121 23:47:06.625431  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.626043  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.626079  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.626664  251263 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/config.json ...
	I1121 23:47:06.626937  251263 start.go:128] duration metric: took 18.44331085s to createHost
	I1121 23:47:06.629842  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.630374  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.630404  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.630671  251263 main.go:143] libmachine: Using SSH client type: native
	I1121 23:47:06.630883  251263 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I1121 23:47:06.630893  251263 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1121 23:47:06.742838  251263 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763768826.701122136
	
	I1121 23:47:06.742869  251263 fix.go:216] guest clock: 1763768826.701122136
	I1121 23:47:06.742878  251263 fix.go:229] Guest: 2025-11-21 23:47:06.701122136 +0000 UTC Remote: 2025-11-21 23:47:06.626948375 +0000 UTC m=+18.545515405 (delta=74.173761ms)
	I1121 23:47:06.742897  251263 fix.go:200] guest clock delta is within tolerance: 74.173761ms
	I1121 23:47:06.742902  251263 start.go:83] releasing machines lock for "addons-266876", held for 18.559341059s
	I1121 23:47:06.745883  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.746295  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.746321  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.746833  251263 ssh_runner.go:195] Run: cat /version.json
	I1121 23:47:06.746947  251263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 23:47:06.750243  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.750247  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.750776  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.750809  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.750823  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:06.750856  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:06.751031  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:06.751199  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:06.830906  251263 ssh_runner.go:195] Run: systemctl --version
	I1121 23:47:06.862977  251263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 23:47:07.024839  251263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 23:47:07.032647  251263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 23:47:07.032771  251263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 23:47:07.054527  251263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 23:47:07.054564  251263 start.go:496] detecting cgroup driver to use...
	I1121 23:47:07.054645  251263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 23:47:07.075688  251263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 23:47:07.094661  251263 docker.go:218] disabling cri-docker service (if available) ...
	I1121 23:47:07.094747  251263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 23:47:07.112602  251263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 23:47:07.129177  251263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 23:47:07.274890  251263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 23:47:07.492757  251263 docker.go:234] disabling docker service ...
	I1121 23:47:07.492831  251263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 23:47:07.510021  251263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 23:47:07.525620  251263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 23:47:07.675935  251263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 23:47:07.820400  251263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 23:47:07.837622  251263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 23:47:07.861864  251263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 23:47:07.861942  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.875198  251263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 23:47:07.875282  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.889198  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.902595  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.915879  251263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 23:47:07.929954  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.943664  251263 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:47:07.965719  251263 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
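	The sed commands above point CRI-O at the pause image, switch it to the cgroupfs cgroup manager, and open unprivileged low ports via default_sysctls. Their combined effect can be read back from the drop-in they edit:
	
	  # review the keys the provisioner just rewrote
	  grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	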
	I1121 23:47:07.978868  251263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 23:47:07.991074  251263 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
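	The failed sysctl is expected at this point: the br_netfilter module is not loaded yet, so the bridge keys do not exist under /proc/sys. The very next steps load the module and enable IPv4 forwarding; done by hand, the sequence is:
	
	  sudo modprobe br_netfilter
	  sysctl net.bridge.bridge-nf-call-iptables          # key should exist once the module is loaded
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	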
	I1121 23:47:07.991144  251263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1121 23:47:08.015804  251263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 23:47:08.029594  251263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:47:08.172544  251263 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 23:47:08.286465  251263 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 23:47:08.286546  251263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 23:47:08.292422  251263 start.go:564] Will wait 60s for crictl version
	I1121 23:47:08.292523  251263 ssh_runner.go:195] Run: which crictl
	I1121 23:47:08.297252  251263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1121 23:47:08.333825  251263 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
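	With CRI-O answering over its socket, the CRI endpoint can also be queried directly; the socket path below is the one written to /etc/crictl.yaml earlier in this log:
	
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images
	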
	I1121 23:47:08.333924  251263 ssh_runner.go:195] Run: crio --version
	I1121 23:47:08.364777  251263 ssh_runner.go:195] Run: crio --version
	I1121 23:47:08.397593  251263 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1121 23:47:08.401817  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:08.402315  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:08.402343  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:08.402614  251263 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1121 23:47:08.408058  251263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 23:47:08.427560  251263 kubeadm.go:884] updating cluster {Name:addons-266876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 23:47:08.427708  251263 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:47:08.427752  251263 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 23:47:08.466046  251263 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1121 23:47:08.466131  251263 ssh_runner.go:195] Run: which lz4
	I1121 23:47:08.471268  251263 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1121 23:47:08.476699  251263 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1121 23:47:08.476733  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1121 23:47:10.046904  251263 crio.go:462] duration metric: took 1.575665951s to copy over tarball
	I1121 23:47:10.046997  251263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1121 23:47:11.663077  251263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.616046572s)
	I1121 23:47:11.663118  251263 crio.go:469] duration metric: took 1.616181048s to extract the tarball
	I1121 23:47:11.663129  251263 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1121 23:47:11.705893  251263 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 23:47:11.746467  251263 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 23:47:11.746493  251263 cache_images.go:86] Images are preloaded, skipping loading
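The preload sequence above amounts to copying the ~409 MB image tarball into the guest and unpacking it under /var, so that CRI-O's image store is populated before kubeadm runs. A rough equivalent run inside the guest (paths and tar flags as logged; the final grep is only an illustrative check, not part of the flow):
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	sudo crictl images --output json | grep -c kube-apiserver   # non-zero once the preload is in place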
	I1121 23:47:11.746502  251263 kubeadm.go:935] updating node { 192.168.39.50 8443 v1.34.1 crio true true} ...
	I1121 23:47:11.746609  251263 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-266876 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 23:47:11.746698  251263 ssh_runner.go:195] Run: crio config
	I1121 23:47:11.795708  251263 cni.go:84] Creating CNI manager for ""
	I1121 23:47:11.795739  251263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1121 23:47:11.795759  251263 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 23:47:11.795781  251263 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-266876 NodeName:addons-266876 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 23:47:11.795901  251263 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-266876"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.50"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 23:47:11.795977  251263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 23:47:11.808516  251263 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 23:47:11.808581  251263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 23:47:11.820622  251263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1121 23:47:11.842831  251263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 23:47:11.864556  251263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
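The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new before the cluster is initialized. If you want to vet it by hand, a dry run with the same bundled binary is a low-risk way to do so (this step is not part of the logged flow):
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run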
	I1121 23:47:11.887018  251263 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I1121 23:47:11.891743  251263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 23:47:11.907140  251263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:47:12.050500  251263 ssh_runner.go:195] Run: sudo systemctl start kubelet
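At this point the kubelet unit and its 10-kubeadm.conf drop-in have been installed and the service started. A quick sanity check from inside the guest (not performed in this log) would be:
	systemctl cat kubelet        # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf override
	systemctl is-active kubelet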
	I1121 23:47:12.084445  251263 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876 for IP: 192.168.39.50
	I1121 23:47:12.084477  251263 certs.go:195] generating shared ca certs ...
	I1121 23:47:12.084503  251263 certs.go:227] acquiring lock for ca certs: {Name:mk43fa762c6315605485300e07cf83f8f357f8dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.084733  251263 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key
	I1121 23:47:12.219080  251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt ...
	I1121 23:47:12.219114  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt: {Name:mk4ab860b5f00eeacc7d5a064e6b8682b8350cc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.219328  251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key ...
	I1121 23:47:12.219350  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key: {Name:mkd33a6a072a0fb7cb39783adfcb9f792da25f35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.219466  251263 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key
	I1121 23:47:12.275894  251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.crt ...
	I1121 23:47:12.275930  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.crt: {Name:mk4874a4ae2a76e1a44a3b81a6402bcd1f4b9663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.276126  251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key ...
	I1121 23:47:12.276145  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key: {Name:mk1d8c1db5a8f9f2ab09a6bc1211706c413d6bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.276291  251263 certs.go:257] generating profile certs ...
	I1121 23:47:12.276376  251263 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.key
	I1121 23:47:12.276402  251263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt with IP's: []
	I1121 23:47:12.405508  251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt ...
	I1121 23:47:12.405541  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: {Name:mkcc0d2bdbfeba71ea1f4e63e41e1151d9d382ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.405791  251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.key ...
	I1121 23:47:12.405812  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.key: {Name:mk1d82213fc29dcec5419cdd18c321f7613a56e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.405953  251263 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key.8d7367ca
	I1121 23:47:12.405982  251263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt.8d7367ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.50]
	I1121 23:47:12.443135  251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt.8d7367ca ...
	I1121 23:47:12.443162  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt.8d7367ca: {Name:mk318161f2384c8556874dd6e6e5fc8eee5c9cdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.443363  251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key.8d7367ca ...
	I1121 23:47:12.443385  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key.8d7367ca: {Name:mke2fa439b03069f58550af68f202fe26e9c97ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.443489  251263 certs.go:382] copying /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt.8d7367ca -> /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt
	I1121 23:47:12.443595  251263 certs.go:386] copying /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key.8d7367ca -> /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key
	I1121 23:47:12.443670  251263 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.key
	I1121 23:47:12.443705  251263 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.crt with IP's: []
	I1121 23:47:12.603488  251263 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.crt ...
	I1121 23:47:12.603520  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.crt: {Name:mk795b280bcd9c59cf78ec03ece9d4b0753eaaa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.603755  251263 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.key ...
	I1121 23:47:12.603779  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.key: {Name:mkfe4eecc4523b56c0d41272318c6e77ecb4dd52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:12.604032  251263 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem (1675 bytes)
	I1121 23:47:12.604112  251263 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem (1078 bytes)
	I1121 23:47:12.604152  251263 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem (1123 bytes)
	I1121 23:47:12.604194  251263 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem (1679 bytes)
	I1121 23:47:12.604861  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 23:47:12.637531  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 23:47:12.669272  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 23:47:12.700033  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1121 23:47:12.730398  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 23:47:12.766760  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 23:47:12.814595  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 23:47:12.848615  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 23:47:12.879920  251263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 23:47:12.912022  251263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 23:47:12.933857  251263 ssh_runner.go:195] Run: openssl version
	I1121 23:47:12.940506  251263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 23:47:12.953948  251263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:47:12.959503  251263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:47:12.959560  251263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:47:12.967627  251263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
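The two steps above install the minikube CA into the system trust store using OpenSSL's subject-hash naming: the hash printed by openssl x509 -hash becomes the symlink name with a .0 suffix. Reproducing it by hand inside the guest:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
	ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem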
	I1121 23:47:12.981398  251263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 23:47:12.986879  251263 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 23:47:12.986957  251263 kubeadm.go:401] StartCluster: {Name:addons-266876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-266876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:47:12.987064  251263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:47:12.987158  251263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:47:13.025633  251263 cri.go:89] found id: ""
	I1121 23:47:13.025741  251263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 23:47:13.038755  251263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 23:47:13.052370  251263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 23:47:13.065036  251263 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 23:47:13.065062  251263 kubeadm.go:158] found existing configuration files:
	
	I1121 23:47:13.065139  251263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 23:47:13.077032  251263 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 23:47:13.077097  251263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 23:47:13.090073  251263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 23:47:13.101398  251263 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 23:47:13.101465  251263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 23:47:13.114396  251263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 23:47:13.126235  251263 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 23:47:13.126304  251263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 23:47:13.139694  251263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 23:47:13.151819  251263 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 23:47:13.151882  251263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 23:47:13.164512  251263 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1121 23:47:13.226756  251263 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 23:47:13.226832  251263 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 23:47:13.345339  251263 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 23:47:13.345491  251263 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 23:47:13.345647  251263 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 23:47:13.359341  251263 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 23:47:13.436841  251263 out.go:252]   - Generating certificates and keys ...
	I1121 23:47:13.437031  251263 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 23:47:13.437171  251263 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 23:47:13.558105  251263 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 23:47:13.651102  251263 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 23:47:13.902476  251263 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 23:47:14.134826  251263 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 23:47:14.345459  251263 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 23:47:14.345645  251263 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-266876 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I1121 23:47:14.583497  251263 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 23:47:14.583717  251263 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-266876 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I1121 23:47:14.931062  251263 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 23:47:15.434495  251263 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 23:47:15.838983  251263 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 23:47:15.839096  251263 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 23:47:15.963541  251263 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 23:47:16.269311  251263 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 23:47:16.929016  251263 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 23:47:17.056928  251263 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 23:47:17.384976  251263 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 23:47:17.385309  251263 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 23:47:17.387510  251263 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 23:47:17.389626  251263 out.go:252]   - Booting up control plane ...
	I1121 23:47:17.389730  251263 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 23:47:17.389802  251263 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 23:47:17.389859  251263 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 23:47:17.408245  251263 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 23:47:17.408393  251263 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 23:47:17.416098  251263 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 23:47:17.416463  251263 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 23:47:17.416528  251263 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 23:47:17.572061  251263 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 23:47:17.572273  251263 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 23:47:18.575810  251263 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.003449114s
	I1121 23:47:18.581453  251263 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 23:47:18.581592  251263 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.50:8443/livez
	I1121 23:47:18.581745  251263 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 23:47:18.581872  251263 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 23:47:21.444953  251263 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.865438426s
	I1121 23:47:22.473854  251263 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.895647364s
	I1121 23:47:24.581213  251263 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003558147s
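The control-plane checks above poll three endpoints until they report healthy. From inside the guest the same probes can be run by hand (self-signed certs, hence -k; this is illustrative, not part of the test):
	curl -sk https://192.168.39.50:8443/livez    # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz     # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez       # kube-scheduler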
	I1121 23:47:24.600634  251263 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 23:47:24.621062  251263 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 23:47:24.638002  251263 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 23:47:24.638263  251263 kubeadm.go:319] [mark-control-plane] Marking the node addons-266876 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 23:47:24.652039  251263 kubeadm.go:319] [bootstrap-token] Using token: grn95n.s74ahx9w73uu3ca1
	I1121 23:47:24.653732  251263 out.go:252]   - Configuring RBAC rules ...
	I1121 23:47:24.653880  251263 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 23:47:24.659155  251263 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 23:47:24.672314  251263 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 23:47:24.680496  251263 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 23:47:24.684483  251263 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 23:47:24.688905  251263 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 23:47:24.990519  251263 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 23:47:25.446692  251263 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 23:47:25.987142  251263 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 23:47:25.988495  251263 kubeadm.go:319] 
	I1121 23:47:25.988586  251263 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 23:47:25.988628  251263 kubeadm.go:319] 
	I1121 23:47:25.988755  251263 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 23:47:25.988774  251263 kubeadm.go:319] 
	I1121 23:47:25.988799  251263 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 23:47:25.988879  251263 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 23:47:25.988970  251263 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 23:47:25.988990  251263 kubeadm.go:319] 
	I1121 23:47:25.989051  251263 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 23:47:25.989061  251263 kubeadm.go:319] 
	I1121 23:47:25.989146  251263 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 23:47:25.989158  251263 kubeadm.go:319] 
	I1121 23:47:25.989248  251263 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 23:47:25.989366  251263 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 23:47:25.989475  251263 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 23:47:25.989488  251263 kubeadm.go:319] 
	I1121 23:47:25.989602  251263 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 23:47:25.989728  251263 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 23:47:25.989738  251263 kubeadm.go:319] 
	I1121 23:47:25.989856  251263 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token grn95n.s74ahx9w73uu3ca1 \
	I1121 23:47:25.990007  251263 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7035eabdb6dc9c299f99d6120e0649f8a13de0412ab5d63e88dba6debc1b302c \
	I1121 23:47:25.990049  251263 kubeadm.go:319] 	--control-plane 
	I1121 23:47:25.990057  251263 kubeadm.go:319] 
	I1121 23:47:25.990176  251263 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 23:47:25.990186  251263 kubeadm.go:319] 
	I1121 23:47:25.990300  251263 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token grn95n.s74ahx9w73uu3ca1 \
	I1121 23:47:25.990438  251263 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7035eabdb6dc9c299f99d6120e0649f8a13de0412ab5d63e88dba6debc1b302c 
	I1121 23:47:25.992560  251263 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
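The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. With the certificatesDir configured earlier (/var/lib/minikube/certs) it can be recomputed with the standard kubeadm recipe, assuming an RSA CA key:
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'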
	I1121 23:47:25.992602  251263 cni.go:84] Creating CNI manager for ""
	I1121 23:47:25.992623  251263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1121 23:47:25.994543  251263 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1121 23:47:25.996106  251263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1121 23:47:26.010555  251263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1121 23:47:26.033834  251263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 23:47:26.033972  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:26.033980  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-266876 minikube.k8s.io/updated_at=2025_11_21T23_47_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=addons-266876 minikube.k8s.io/primary=true
	I1121 23:47:26.084057  251263 ops.go:34] apiserver oom_adj: -16
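The clusterrolebinding and node-label calls above grant cluster-admin to the kube-system default service account (which several addons rely on) and tag the node with minikube metadata. Both can be verified with the same bundled kubectl and kubeconfig used in the log:
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node addons-266876 --show-labels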
	I1121 23:47:26.203325  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:26.704291  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:27.204057  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:27.704402  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:28.204383  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:28.704103  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:29.204400  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:29.704060  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:30.204340  251263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:47:30.314187  251263 kubeadm.go:1114] duration metric: took 4.280316282s to wait for elevateKubeSystemPrivileges
	I1121 23:47:30.314239  251263 kubeadm.go:403] duration metric: took 17.327291456s to StartCluster
	I1121 23:47:30.314270  251263 settings.go:142] acquiring lock: {Name:mkd124ec98418d6d2386a8f1a0e2e5ff6f0f99d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:30.314449  251263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1121 23:47:30.314952  251263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/kubeconfig: {Name:mkbde37dbfe874aace118914fefd91b607e3afff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:30.315195  251263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 23:47:30.315224  251263 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 23:47:30.315300  251263 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
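The toEnable map above is the full addon selection for this profile. Outside the test harness, the same state can be inspected or changed per addon with the minikube binary built for this run, for example:
	out/minikube-linux-amd64 -p addons-266876 addons list
	out/minikube-linux-amd64 -p addons-266876 addons enable metrics-server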
	I1121 23:47:30.315425  251263 addons.go:70] Setting yakd=true in profile "addons-266876"
	I1121 23:47:30.315450  251263 addons.go:239] Setting addon yakd=true in "addons-266876"
	I1121 23:47:30.315462  251263 addons.go:70] Setting inspektor-gadget=true in profile "addons-266876"
	I1121 23:47:30.315485  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.315491  251263 addons.go:239] Setting addon inspektor-gadget=true in "addons-266876"
	I1121 23:47:30.315501  251263 addons.go:70] Setting default-storageclass=true in profile "addons-266876"
	I1121 23:47:30.315529  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.315528  251263 config.go:182] Loaded profile config "addons-266876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:47:30.315544  251263 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-266876"
	I1121 23:47:30.315569  251263 addons.go:70] Setting cloud-spanner=true in profile "addons-266876"
	I1121 23:47:30.315601  251263 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-266876"
	I1121 23:47:30.315604  251263 addons.go:70] Setting registry-creds=true in profile "addons-266876"
	I1121 23:47:30.315608  251263 addons.go:239] Setting addon cloud-spanner=true in "addons-266876"
	I1121 23:47:30.315620  251263 addons.go:239] Setting addon registry-creds=true in "addons-266876"
	I1121 23:47:30.315642  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.315644  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.315644  251263 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-266876"
	I1121 23:47:30.315691  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.315903  251263 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-266876"
	I1121 23:47:30.315921  251263 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-266876"
	I1121 23:47:30.315947  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.316235  251263 addons.go:70] Setting ingress=true in profile "addons-266876"
	I1121 23:47:30.316274  251263 addons.go:239] Setting addon ingress=true in "addons-266876"
	I1121 23:47:30.316310  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.316663  251263 addons.go:70] Setting registry=true in profile "addons-266876"
	I1121 23:47:30.316697  251263 addons.go:239] Setting addon registry=true in "addons-266876"
	I1121 23:47:30.316723  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.317068  251263 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-266876"
	I1121 23:47:30.317089  251263 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-266876"
	I1121 23:47:30.317115  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.317160  251263 addons.go:70] Setting gcp-auth=true in profile "addons-266876"
	I1121 23:47:30.315588  251263 addons.go:70] Setting ingress-dns=true in profile "addons-266876"
	I1121 23:47:30.317206  251263 mustload.go:66] Loading cluster: addons-266876
	I1121 23:47:30.317231  251263 addons.go:239] Setting addon ingress-dns=true in "addons-266876"
	I1121 23:47:30.317253  251263 addons.go:70] Setting metrics-server=true in profile "addons-266876"
	I1121 23:47:30.317268  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.317272  251263 addons.go:239] Setting addon metrics-server=true in "addons-266876"
	I1121 23:47:30.317299  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.317400  251263 config.go:182] Loaded profile config "addons-266876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:47:30.317441  251263 addons.go:70] Setting storage-provisioner=true in profile "addons-266876"
	I1121 23:47:30.317460  251263 addons.go:239] Setting addon storage-provisioner=true in "addons-266876"
	I1121 23:47:30.317490  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.317944  251263 addons.go:70] Setting volcano=true in profile "addons-266876"
	I1121 23:47:30.317973  251263 addons.go:239] Setting addon volcano=true in "addons-266876"
	I1121 23:47:30.318000  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.318181  251263 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-266876"
	I1121 23:47:30.318207  251263 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-266876"
	I1121 23:47:30.318457  251263 addons.go:70] Setting volumesnapshots=true in profile "addons-266876"
	I1121 23:47:30.318489  251263 addons.go:239] Setting addon volumesnapshots=true in "addons-266876"
	I1121 23:47:30.318514  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.318636  251263 out.go:179] * Verifying Kubernetes components...
	I1121 23:47:30.321872  251263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:47:30.323979  251263 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1121 23:47:30.324015  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1121 23:47:30.324059  251263 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1121 23:47:30.324308  251263 addons.go:239] Setting addon default-storageclass=true in "addons-266876"
	I1121 23:47:30.324852  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.325430  251263 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1121 23:47:30.325460  251263 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1121 23:47:30.325834  251263 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1121 23:47:30.325536  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.326179  251263 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:47:30.326187  251263 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1121 23:47:30.326317  251263 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1121 23:47:30.326336  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1121 23:47:30.326936  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1121 23:47:30.326998  251263 out.go:179]   - Using image docker.io/registry:3.0.0
	I1121 23:47:30.326980  251263 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1121 23:47:30.327044  251263 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1121 23:47:30.327543  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	W1121 23:47:30.327112  251263 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1121 23:47:30.327823  251263 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1121 23:47:30.327894  251263 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 23:47:30.328316  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1121 23:47:30.327908  251263 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1121 23:47:30.327937  251263 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1121 23:47:30.328129  251263 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-266876"
	I1121 23:47:30.328994  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:30.328605  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1121 23:47:30.328665  251263 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 23:47:30.328694  251263 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 23:47:30.330248  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1121 23:47:30.329173  251263 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 23:47:30.330310  251263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 23:47:30.330603  251263 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1121 23:47:30.330604  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1121 23:47:30.331083  251263 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1121 23:47:30.330604  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1121 23:47:30.330630  251263 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1121 23:47:30.331264  251263 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1121 23:47:30.330646  251263 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1121 23:47:30.330654  251263 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 23:47:30.331990  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1121 23:47:30.330703  251263 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 23:47:30.332116  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1121 23:47:30.331545  251263 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 23:47:30.332194  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 23:47:30.332542  251263 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1121 23:47:30.332882  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1121 23:47:30.334102  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1121 23:47:30.334436  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.335240  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.335327  251263 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:47:30.335355  251263 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1121 23:47:30.336111  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.336119  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.336147  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.336581  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1121 23:47:30.336829  251263 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 23:47:30.336847  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1121 23:47:30.336857  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.336898  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.336963  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.337875  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.337944  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.337986  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.338791  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.338889  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.339032  251263 out.go:179]   - Using image docker.io/busybox:stable
	I1121 23:47:30.339781  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1121 23:47:30.340483  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.340514  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.340666  251263 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 23:47:30.340695  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1121 23:47:30.340797  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.341117  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.341357  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.342122  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.342189  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.342220  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.342778  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.342795  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.342811  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.342975  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.343022  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.343206  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1121 23:47:30.343363  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.343504  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.343566  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.343596  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.344162  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.344636  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.344648  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.344718  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.344930  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.344977  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.345068  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.345337  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.345379  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.345381  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.345342  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.345569  251263 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1121 23:47:30.345654  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.346248  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.346289  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.346396  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.346427  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.346508  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.346706  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.346995  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1121 23:47:30.347011  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1121 23:47:30.347328  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.347842  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.347873  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.348042  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.348168  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.348658  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.348696  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.348924  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:30.349955  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.350423  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:30.350455  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:30.350644  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	W1121 23:47:30.571554  251263 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53184->192.168.39.50:22: read: connection reset by peer
	I1121 23:47:30.571604  251263 retry.go:31] will retry after 237.893493ms: ssh: handshake failed: read tcp 192.168.39.1:53184->192.168.39.50:22: read: connection reset by peer
	W1121 23:47:30.594670  251263 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53214->192.168.39.50:22: read: connection reset by peer
	I1121 23:47:30.594718  251263 retry.go:31] will retry after 219.796697ms: ssh: handshake failed: read tcp 192.168.39.1:53214->192.168.39.50:22: read: connection reset by peer
	W1121 23:47:30.648821  251263 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53232->192.168.39.50:22: read: connection reset by peer
	I1121 23:47:30.648855  251263 retry.go:31] will retry after 280.923937ms: ssh: handshake failed: read tcp 192.168.39.1:53232->192.168.39.50:22: read: connection reset by peer
	I1121 23:47:30.906273  251263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 23:47:30.906343  251263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 23:47:31.303471  251263 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1121 23:47:31.303497  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 23:47:31.303519  251263 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1121 23:47:31.329075  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1121 23:47:31.372362  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 23:47:31.401245  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 23:47:31.443583  251263 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1121 23:47:31.443617  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1121 23:47:31.448834  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 23:47:31.496006  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 23:47:31.498539  251263 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1121 23:47:31.498563  251263 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1121 23:47:31.569835  251263 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1121 23:47:31.569869  251263 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1121 23:47:31.572494  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 23:47:31.624422  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1121 23:47:31.627643  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 23:47:31.900562  251263 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1121 23:47:31.900602  251263 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1121 23:47:32.010439  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 23:47:32.024813  251263 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1121 23:47:32.024876  251263 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1121 23:47:32.170850  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1121 23:47:32.170888  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1121 23:47:32.219733  251263 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1121 23:47:32.219791  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1121 23:47:32.404951  251263 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1121 23:47:32.404996  251263 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1121 23:47:32.544216  251263 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1121 23:47:32.544253  251263 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1121 23:47:32.578250  251263 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 23:47:32.578284  251263 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1121 23:47:32.653254  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1121 23:47:32.653285  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1121 23:47:32.741481  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1121 23:47:32.794874  251263 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1121 23:47:32.794909  251263 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1121 23:47:32.881148  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 23:47:33.067639  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1121 23:47:33.067700  251263 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1121 23:47:33.067715  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1121 23:47:33.067738  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1121 23:47:33.271805  251263 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1121 23:47:33.271834  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1121 23:47:33.312325  251263 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:47:33.312356  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1121 23:47:33.436072  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1121 23:47:33.436107  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1121 23:47:33.708500  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:47:33.708927  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1121 23:47:34.040431  251263 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1121 23:47:34.040474  251263 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1121 23:47:34.408465  251263 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.502153253s)
	I1121 23:47:34.408519  251263 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.502134143s)
	I1121 23:47:34.408554  251263 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1121 23:47:34.408578  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.105046996s)
	I1121 23:47:34.409219  251263 node_ready.go:35] waiting up to 6m0s for node "addons-266876" to be "Ready" ...
	I1121 23:47:34.415213  251263 node_ready.go:49] node "addons-266876" is "Ready"
	I1121 23:47:34.415248  251263 node_ready.go:38] duration metric: took 6.005684ms for node "addons-266876" to be "Ready" ...
	I1121 23:47:34.415268  251263 api_server.go:52] waiting for apiserver process to appear ...
	I1121 23:47:34.415324  251263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 23:47:34.664082  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1121 23:47:34.664113  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1121 23:47:34.918427  251263 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-266876" context rescaled to 1 replicas
	I1121 23:47:35.149255  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1121 23:47:35.149293  251263 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1121 23:47:35.732395  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1121 23:47:35.732425  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1121 23:47:36.406188  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1121 23:47:36.406216  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1121 23:47:36.897571  251263 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 23:47:36.897608  251263 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1121 23:47:37.313754  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 23:47:37.790744  251263 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1121 23:47:37.793928  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:37.794570  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:37.794603  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:37.794806  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:38.530200  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.201079248s)
	I1121 23:47:38.530311  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.15790373s)
	I1121 23:47:38.530349  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.129067228s)
	I1121 23:47:38.530410  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.081551551s)
	I1121 23:47:38.530485  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.034438414s)
	I1121 23:47:38.530531  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.958009964s)
	I1121 23:47:38.530576  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.90611639s)
	I1121 23:47:38.530688  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.902998512s)
	W1121 23:47:38.596091  251263 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1121 23:47:38.696471  251263 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1121 23:47:39.049239  251263 addons.go:239] Setting addon gcp-auth=true in "addons-266876"
	I1121 23:47:39.049319  251263 host.go:66] Checking if "addons-266876" exists ...
	I1121 23:47:39.051589  251263 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1121 23:47:39.054431  251263 main.go:143] libmachine: domain addons-266876 has defined MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:39.054905  251263 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ab:5a:31", ip: ""} in network mk-addons-266876: {Iface:virbr1 ExpiryTime:2025-11-22 00:47:04 +0000 UTC Type:0 Mac:52:54:00:ab:5a:31 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-266876 Clientid:01:52:54:00:ab:5a:31}
	I1121 23:47:39.054946  251263 main.go:143] libmachine: domain addons-266876 has defined IP address 192.168.39.50 and MAC address 52:54:00:ab:5a:31 in network mk-addons-266876
	I1121 23:47:39.055124  251263 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/addons-266876/id_rsa Username:docker}
	I1121 23:47:40.911949  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.901459816s)
	I1121 23:47:40.912003  251263 addons.go:495] Verifying addon ingress=true in "addons-266876"
	I1121 23:47:40.912027  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.170505015s)
	I1121 23:47:40.912060  251263 addons.go:495] Verifying addon registry=true in "addons-266876"
	I1121 23:47:40.912106  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.030918863s)
	I1121 23:47:40.912208  251263 addons.go:495] Verifying addon metrics-server=true in "addons-266876"
	I1121 23:47:40.913759  251263 out.go:179] * Verifying ingress addon...
	I1121 23:47:40.913769  251263 out.go:179] * Verifying registry addon...
	I1121 23:47:40.916006  251263 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1121 23:47:40.916028  251263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1121 23:47:41.040220  251263 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1121 23:47:41.040250  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:41.043403  251263 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 23:47:41.043428  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:41.261875  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.5533177s)
	W1121 23:47:41.261945  251263 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1121 23:47:41.261983  251263 retry.go:31] will retry after 128.365697ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1121 23:47:41.262010  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.553035838s)
	I1121 23:47:41.262077  251263 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.846726255s)
	I1121 23:47:41.262115  251263 api_server.go:72] duration metric: took 10.946861397s to wait for apiserver process to appear ...
	I1121 23:47:41.262194  251263 api_server.go:88] waiting for apiserver healthz status ...
	I1121 23:47:41.262220  251263 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I1121 23:47:41.263907  251263 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-266876 service yakd-dashboard -n yakd-dashboard
	
	I1121 23:47:41.282742  251263 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I1121 23:47:41.287497  251263 api_server.go:141] control plane version: v1.34.1
	I1121 23:47:41.287535  251263 api_server.go:131] duration metric: took 25.332513ms to wait for apiserver health ...
	I1121 23:47:41.287548  251263 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 23:47:41.306603  251263 system_pods.go:59] 16 kube-system pods found
	I1121 23:47:41.306658  251263 system_pods.go:61] "amd-gpu-device-plugin-pd4sx" [88fffae7-a3c2-46ef-a382-867c1f45dd2f] Running
	I1121 23:47:41.306672  251263 system_pods.go:61] "coredns-66bc5c9577-kmf4p" [c2dae9ee-3a8e-4c5c-9880-256aae9475c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:41.306696  251263 system_pods.go:61] "coredns-66bc5c9577-tgk67" [ad56ae13-a7c4-44e3-a817-73aa300110b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:41.306706  251263 system_pods.go:61] "etcd-addons-266876" [6329b2aa-df0d-4707-8094-52ed6a9b70fa] Running
	I1121 23:47:41.306714  251263 system_pods.go:61] "kube-apiserver-addons-266876" [d6500ed5-1e8a-40e7-8761-ce5d9b817580] Running
	I1121 23:47:41.306720  251263 system_pods.go:61] "kube-controller-manager-addons-266876" [9afdbaac-04de-4ae4-a1a5-ab74382c1ee4] Running
	I1121 23:47:41.306728  251263 system_pods.go:61] "kube-ingress-dns-minikube" [4c8445af-f050-4525-a580-c6cb45567d21] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:47:41.306737  251263 system_pods.go:61] "kube-proxy-d6jsf" [8c9f1dbf-19b7-4f19-8f33-11b6886f1237] Running
	I1121 23:47:41.306742  251263 system_pods.go:61] "kube-scheduler-addons-266876" [c367cb26-5fd3-4d41-8752-d8b7d6fe6c13] Running
	I1121 23:47:41.306749  251263 system_pods.go:61] "metrics-server-85b7d694d7-tcd7p" [8b9a51ce-61d0-430c-98de-9174d78d47d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:47:41.306759  251263 system_pods.go:61] "nvidia-device-plugin-daemonset-6fx49" [4603aca8-96ff-429c-870e-aaa1a7987b07] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 23:47:41.306768  251263 system_pods.go:61] "registry-6b586f9694-g5tcd" [5c882c89-9d82-4657-b7a0-d20f145866ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:47:41.306780  251263 system_pods.go:61] "registry-creds-764b6fb674-c6k42" [b80bedd0-b303-4ba0-9c40-f2fd2464333c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:47:41.306789  251263 system_pods.go:61] "registry-proxy-xjdbm" [9ce7be2d-00fc-42ac-8617-38e2d4ecac77] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 23:47:41.306795  251263 system_pods.go:61] "snapshot-controller-7d9fbc56b8-r57wx" [136bb70d-9950-46db-83d9-09b543dc4f72] Pending
	I1121 23:47:41.306803  251263 system_pods.go:61] "storage-provisioner" [2855a3de-b990-447c-b094-274b5becf1da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 23:47:41.306812  251263 system_pods.go:74] duration metric: took 19.257263ms to wait for pod list to return data ...
	I1121 23:47:41.306823  251263 default_sa.go:34] waiting for default service account to be created ...
	I1121 23:47:41.323263  251263 default_sa.go:45] found service account: "default"
	I1121 23:47:41.323302  251263 default_sa.go:55] duration metric: took 16.457401ms for default service account to be created ...
	I1121 23:47:41.323317  251263 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 23:47:41.337749  251263 system_pods.go:86] 17 kube-system pods found
	I1121 23:47:41.337783  251263 system_pods.go:89] "amd-gpu-device-plugin-pd4sx" [88fffae7-a3c2-46ef-a382-867c1f45dd2f] Running
	I1121 23:47:41.337791  251263 system_pods.go:89] "coredns-66bc5c9577-kmf4p" [c2dae9ee-3a8e-4c5c-9880-256aae9475c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:41.337797  251263 system_pods.go:89] "coredns-66bc5c9577-tgk67" [ad56ae13-a7c4-44e3-a817-73aa300110b6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:41.337803  251263 system_pods.go:89] "etcd-addons-266876" [6329b2aa-df0d-4707-8094-52ed6a9b70fa] Running
	I1121 23:47:41.337808  251263 system_pods.go:89] "kube-apiserver-addons-266876" [d6500ed5-1e8a-40e7-8761-ce5d9b817580] Running
	I1121 23:47:41.337812  251263 system_pods.go:89] "kube-controller-manager-addons-266876" [9afdbaac-04de-4ae4-a1a5-ab74382c1ee4] Running
	I1121 23:47:41.337817  251263 system_pods.go:89] "kube-ingress-dns-minikube" [4c8445af-f050-4525-a580-c6cb45567d21] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:47:41.337821  251263 system_pods.go:89] "kube-proxy-d6jsf" [8c9f1dbf-19b7-4f19-8f33-11b6886f1237] Running
	I1121 23:47:41.337826  251263 system_pods.go:89] "kube-scheduler-addons-266876" [c367cb26-5fd3-4d41-8752-d8b7d6fe6c13] Running
	I1121 23:47:41.337831  251263 system_pods.go:89] "metrics-server-85b7d694d7-tcd7p" [8b9a51ce-61d0-430c-98de-9174d78d47d6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:47:41.337839  251263 system_pods.go:89] "nvidia-device-plugin-daemonset-6fx49" [4603aca8-96ff-429c-870e-aaa1a7987b07] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 23:47:41.337844  251263 system_pods.go:89] "registry-6b586f9694-g5tcd" [5c882c89-9d82-4657-b7a0-d20f145866ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:47:41.337849  251263 system_pods.go:89] "registry-creds-764b6fb674-c6k42" [b80bedd0-b303-4ba0-9c40-f2fd2464333c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:47:41.337854  251263 system_pods.go:89] "registry-proxy-xjdbm" [9ce7be2d-00fc-42ac-8617-38e2d4ecac77] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 23:47:41.337876  251263 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gcprx" [38cf49f5-ed6e-4aa5-bdfe-2494e5763f39] Pending
	I1121 23:47:41.337881  251263 system_pods.go:89] "snapshot-controller-7d9fbc56b8-r57wx" [136bb70d-9950-46db-83d9-09b543dc4f72] Pending
	I1121 23:47:41.337885  251263 system_pods.go:89] "storage-provisioner" [2855a3de-b990-447c-b094-274b5becf1da] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 23:47:41.337897  251263 system_pods.go:126] duration metric: took 14.572276ms to wait for k8s-apps to be running ...
	I1121 23:47:41.337909  251263 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 23:47:41.337964  251263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 23:47:41.391055  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:47:41.444001  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:41.452955  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:41.927933  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:41.929997  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:42.455799  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:42.455860  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:42.926969  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.613140073s)
	I1121 23:47:42.927027  251263 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-266876"
	I1121 23:47:42.927049  251263 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.875424504s)
	I1121 23:47:42.927114  251263 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.589124511s)
	I1121 23:47:42.927233  251263 system_svc.go:56] duration metric: took 1.589318384s WaitForService to wait for kubelet
	I1121 23:47:42.927248  251263 kubeadm.go:587] duration metric: took 12.611994145s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 23:47:42.927275  251263 node_conditions.go:102] verifying NodePressure condition ...
	I1121 23:47:42.928903  251263 out.go:179] * Verifying csi-hostpath-driver addon...
	I1121 23:47:42.928918  251263 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:47:42.930225  251263 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1121 23:47:42.930998  251263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1121 23:47:42.931460  251263 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1121 23:47:42.931483  251263 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1121 23:47:42.948957  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:42.956545  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:42.972599  251263 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 23:47:42.972629  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:42.991010  251263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1121 23:47:42.991043  251263 node_conditions.go:123] node cpu capacity is 2
	I1121 23:47:42.991060  251263 node_conditions.go:105] duration metric: took 63.779822ms to run NodePressure ...
	I1121 23:47:42.991073  251263 start.go:242] waiting for startup goroutines ...
	I1121 23:47:43.000454  251263 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1121 23:47:43.000488  251263 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1121 23:47:43.064083  251263 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 23:47:43.064114  251263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1121 23:47:43.143418  251263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 23:47:43.424997  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:43.428350  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:43.438981  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:43.744014  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.352903636s)
	I1121 23:47:43.926051  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:43.926403  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:43.939557  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:44.470136  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:44.470507  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:44.470583  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:44.610973  251263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.467509011s)
	I1121 23:47:44.612084  251263 addons.go:495] Verifying addon gcp-auth=true in "addons-266876"
	I1121 23:47:44.614664  251263 out.go:179] * Verifying gcp-auth addon...
	I1121 23:47:44.617037  251263 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1121 23:47:44.679516  251263 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1121 23:47:44.679539  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:44.938585  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:44.939917  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:44.945173  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:45.125511  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:45.423184  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:45.424380  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:45.438459  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:45.621893  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:45.929603  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:45.933258  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:45.938917  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:46.123924  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:46.423081  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:46.425799  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:46.437310  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:46.623291  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:46.925943  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:46.926661  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:46.940308  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:47.120567  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:47.421527  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:47.422825  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:47.435356  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:47.622778  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:47.922908  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:47.925722  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:47.937113  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:48.122097  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:48.423467  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:48.423610  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:48.435064  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:48.622264  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:48.926889  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:48.926907  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:48.935809  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:49.124186  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:49.424165  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:49.424235  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:49.436947  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:49.623380  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:49.926485  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:49.926568  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:49.934726  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:50.149039  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:50.426766  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:50.427550  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:50.435800  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:50.623645  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:50.923166  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:50.924899  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:50.937932  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:51.120970  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:51.422946  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:51.423964  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:51.437143  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:51.623848  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:51.924227  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:51.929471  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:51.939629  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:52.261854  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:52.424962  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:52.428597  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:52.436986  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:52.622910  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:52.922271  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:52.924973  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:52.938365  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:53.121701  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:53.425753  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:53.438148  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:53.440564  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:53.709895  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:53.929068  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:53.931342  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:53.938714  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:54.122158  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:54.425360  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:54.428330  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:54.435907  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:54.623125  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:54.926160  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:54.926269  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:54.934959  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:55.123657  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:55.422851  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:55.423292  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:55.436852  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:55.621782  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:56.184531  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:56.185319  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:56.185351  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:56.185436  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:56.422356  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:56.422605  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:56.437477  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:56.621926  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:56.920916  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:56.921374  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:56.935238  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:57.120293  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:57.422033  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:57.424320  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:57.435388  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:57.621432  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:57.920963  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:57.924452  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:57.935839  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:58.121584  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:58.425091  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:58.425156  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:58.435426  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:58.635444  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:58.922739  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:58.923871  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:58.936112  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:59.123863  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:59.426020  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:59.430811  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:59.438808  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:59.623106  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:59.931900  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:59.936038  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:59.937959  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:00.122854  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:00.422993  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:00.424741  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:00.436196  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:00.620554  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:00.921652  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:00.922569  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:00.935087  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:01.123823  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:01.423850  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:01.425512  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:01.434928  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:01.621491  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:01.923505  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:01.924905  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:01.937201  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:02.121624  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:02.423602  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:02.423787  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:02.435107  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:02.620510  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:02.919996  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:02.921258  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:02.934427  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:03.121234  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:03.422602  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:03.422661  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:03.435654  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:03.627887  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:03.923184  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:03.923492  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:03.943565  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:04.122960  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:04.421986  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:04.422381  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:04.435361  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:04.623019  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:04.923848  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:04.925058  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:04.935882  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:05.121708  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:05.421718  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:05.421805  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:05.434879  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:05.622686  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:05.922353  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:05.923753  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:05.936216  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:06.120868  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:06.423712  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:06.423899  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:06.439806  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:06.625663  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:06.922260  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:06.922652  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:06.936062  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:07.121430  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:07.424027  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:07.424073  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:07.435511  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:07.622294  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:07.921125  251263 kapi.go:107] duration metric: took 27.005089483s to wait for kubernetes.io/minikube-addons=registry ...
	I1121 23:48:07.923396  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:07.939621  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:08.121478  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:08.519292  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:08.522400  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:08.626487  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:08.919824  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:08.935099  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:09.123034  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:09.427247  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:09.439663  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:09.630747  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:09.924829  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:09.937762  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:10.126266  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:10.423912  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:10.442758  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:10.829148  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:10.928186  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:10.938788  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:11.126344  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:11.423503  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:11.440161  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:11.628256  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:11.922200  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:12.026774  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:12.122410  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:12.425763  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:12.435748  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:12.620552  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:12.954050  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:12.957856  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:13.126813  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:13.421360  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:13.435025  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:13.629500  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:13.922707  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:13.935410  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:14.123341  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:14.426174  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:14.436803  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:14.622210  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:14.941433  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:14.941557  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:15.122789  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:15.422344  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:15.435838  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:15.620803  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:15.922769  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:15.936263  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:16.123330  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:16.420710  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:16.437443  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:16.622053  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:16.922695  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:16.940782  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:17.241963  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:17.422836  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:17.436564  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:17.623372  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:17.919854  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:17.948897  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:18.124153  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:18.423733  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:18.436717  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:18.622046  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:18.922805  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:18.935793  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:19.122329  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:19.425051  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:19.439118  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:19.619916  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:19.920748  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:19.937662  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:20.128846  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:20.427312  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:20.441072  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:20.627540  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:20.922225  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:20.935498  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:21.125438  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:21.421980  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:21.435607  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:21.622394  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:21.920638  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:21.935580  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:22.121779  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:22.425387  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:22.436106  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:22.622379  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:22.922035  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:22.939454  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:23.123644  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:23.422127  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:23.437099  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:23.621255  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:23.921598  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:23.936278  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:24.121938  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:24.421559  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:24.435263  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:24.621048  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:24.921427  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:24.936154  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:25.128780  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:25.436990  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:25.447989  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:25.627750  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:25.925784  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:25.936653  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:26.125097  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:26.421139  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:26.435288  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:26.621354  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:26.979865  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:26.982130  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:27.121596  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:27.421737  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:27.436413  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:27.622223  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:27.923259  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:27.938238  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:28.122777  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:28.422102  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:28.435098  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:28.624943  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:28.923578  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:28.934884  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:29.123227  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:29.422918  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:29.440055  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:29.621947  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:29.924766  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:29.943765  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:30.125218  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:30.427521  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:30.435473  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:30.622346  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:30.926321  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:30.935211  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:31.125820  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:31.423165  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:31.435981  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:31.624574  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:31.924255  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:31.937572  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:32.123297  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:32.420253  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:32.435092  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:32.620642  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:32.924708  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:32.936867  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:33.122959  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:33.421260  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:33.435115  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:33.622355  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:33.922446  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:33.937891  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:34.121936  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:34.422837  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:34.436876  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:34.621392  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:34.922989  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:34.936968  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:35.121994  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:35.420314  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:35.435229  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:35.620372  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:35.921246  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:35.935379  251263 kapi.go:107] duration metric: took 53.004380156s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1121 23:48:36.121002  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:36.421297  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:36.620475  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:36.920737  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:37.121903  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:37.420740  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:37.621573  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:37.920470  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:38.120871  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:38.419747  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:38.620870  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:38.919569  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:39.121472  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:39.420632  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:39.621914  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:39.919274  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:40.120595  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:40.420718  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:40.621509  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:40.920672  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:41.121166  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:41.422011  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:41.622380  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:41.921196  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:42.120596  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:42.420828  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:42.621388  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:42.921558  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:43.121925  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:43.419853  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:43.622393  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:43.920887  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:44.121285  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:44.420735  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:44.622063  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:44.920303  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:45.123622  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:45.422460  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:45.623240  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:45.938878  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:46.121145  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:46.421462  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:46.621556  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:46.920539  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:47.123242  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:47.434774  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:47.623534  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:47.929223  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:48.125077  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:48.421704  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:48.623369  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:48.922650  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:49.123639  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:49.421456  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:49.624574  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:49.931049  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:50.124348  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:50.420556  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:50.622234  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:50.924025  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:51.124075  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:51.423011  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:51.623295  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:51.920670  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:52.121233  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:52.424341  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:52.621172  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:52.921299  251263 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:53.121769  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:53.420110  251263 kapi.go:107] duration metric: took 1m12.504106807s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1121 23:48:53.621962  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:54.127660  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:54.626400  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:55.122945  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:55.724403  251263 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:56.123402  251263 kapi.go:107] duration metric: took 1m11.506366647s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1121 23:48:56.125238  251263 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-266876 cluster.
	I1121 23:48:56.126693  251263 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1121 23:48:56.128133  251263 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1121 23:48:56.129655  251263 out.go:179] * Enabled addons: amd-gpu-device-plugin, inspektor-gadget, ingress-dns, registry-creds, nvidia-device-plugin, cloud-spanner, storage-provisioner, default-storageclass, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1121 23:48:56.131230  251263 addons.go:530] duration metric: took 1m25.815935443s for enable addons: enabled=[amd-gpu-device-plugin inspektor-gadget ingress-dns registry-creds nvidia-device-plugin cloud-spanner storage-provisioner default-storageclass metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1121 23:48:56.131297  251263 start.go:247] waiting for cluster config update ...
	I1121 23:48:56.131318  251263 start.go:256] writing updated cluster config ...
	I1121 23:48:56.131603  251263 ssh_runner.go:195] Run: rm -f paused
	I1121 23:48:56.139138  251263 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 23:48:56.143255  251263 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tgk67" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.149223  251263 pod_ready.go:94] pod "coredns-66bc5c9577-tgk67" is "Ready"
	I1121 23:48:56.149248  251263 pod_ready.go:86] duration metric: took 5.967724ms for pod "coredns-66bc5c9577-tgk67" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.152622  251263 pod_ready.go:83] waiting for pod "etcd-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.158325  251263 pod_ready.go:94] pod "etcd-addons-266876" is "Ready"
	I1121 23:48:56.158348  251263 pod_ready.go:86] duration metric: took 5.699178ms for pod "etcd-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.161017  251263 pod_ready.go:83] waiting for pod "kube-apiserver-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.165701  251263 pod_ready.go:94] pod "kube-apiserver-addons-266876" is "Ready"
	I1121 23:48:56.165731  251263 pod_ready.go:86] duration metric: took 4.68133ms for pod "kube-apiserver-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.167794  251263 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.546100  251263 pod_ready.go:94] pod "kube-controller-manager-addons-266876" is "Ready"
	I1121 23:48:56.546140  251263 pod_ready.go:86] duration metric: took 378.321116ms for pod "kube-controller-manager-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:56.744763  251263 pod_ready.go:83] waiting for pod "kube-proxy-d6jsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:57.145028  251263 pod_ready.go:94] pod "kube-proxy-d6jsf" is "Ready"
	I1121 23:48:57.145065  251263 pod_ready.go:86] duration metric: took 400.263759ms for pod "kube-proxy-d6jsf" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:57.344109  251263 pod_ready.go:83] waiting for pod "kube-scheduler-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:57.744881  251263 pod_ready.go:94] pod "kube-scheduler-addons-266876" is "Ready"
	I1121 23:48:57.744924  251263 pod_ready.go:86] duration metric: took 400.779811ms for pod "kube-scheduler-addons-266876" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:57.744942  251263 pod_ready.go:40] duration metric: took 1.605761032s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 23:48:57.792759  251263 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 23:48:57.794548  251263 out.go:179] * Done! kubectl is now configured to use "addons-266876" cluster and "default" namespace by default
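The pod_ready lines above show minikube's "extra wait": it polls each core kube-system pod by label until the pod reports Ready (or is gone). Roughly the same check can be reproduced by hand with kubectl wait; a minimal sketch against the context created by this run, with the label list abridged:

    # mirror minikube's extra wait: block until the labelled kube-system pods report Ready
    kubectl --context addons-266876 -n kube-system wait pod \
        -l k8s-app=kube-dns --for=condition=Ready --timeout=240s
    kubectl --context addons-266876 -n kube-system wait pod \
        -l component=kube-apiserver --for=condition=Ready --timeout=240s
    # etcd, kube-controller-manager, kube-proxy and kube-scheduler follow the same pattern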
	
	
	==> CRI-O <==
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.150607945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763769279150580877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:532066,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f37397f9-ca4f-47cb-9b40-d40326f7fd0b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.151657518Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0b72486-1982-4956-8b16-24f78f7ef3db name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.151767433Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0b72486-1982-4956-8b16-24f78f7ef3db name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.152424523Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:991f92b0bd577e0738eef35d65b7c9638d3df53ccea140bf89bfa778f911574f,PodSandboxId:f7f9ecdee49d2fbed73c95266531e98528194cc29d1af1e15c07c6a5e790026a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763768959705557696,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e3ed8c7-5788-4d41-aba1-71043fc65fb1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1205f66bfddc482ac5d5dd1e86224c67e867616e61123b413f3bb6856473dc12,PodSandboxId:7a5080c12c12abdaab939ff714e1e919b116a416135fb5a4c0954b2d73aa77d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763768941325630164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b5956ac-11bb-458f-953a-f0fa68bf575e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51813a3108d9e54ce1c3496176ac5114e7bd1188f2c3673f4a4a3480910eced6,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763768914769733150,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491a8ff7c586acede1f8b3b37821df605946465cc57d997a527260e81bc84cbe,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763768913042350736,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62345e24511bafa136f68a223ce7ed0c511a449ccaac17d536939d218364c8e0,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763768911235493555,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a47
1-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c36592147c998eee903b15108ec385188d6a10ba82bdbc75a1e806aedb354e7,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763768910213861282,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
a1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552ab85d759ae6b592b4d62982e120e0f046fdad6cf73d39fa8e079973301b19,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763768908471046022,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b904d30a44673800c0c3034a976f6ac03bbb3ec299f6d92bb1a5c6ea170a7c57,PodSandboxId:ff134a61cd64ed6f5542d7c8c8469ae269bd32b9fdf955f510ccf7e0b5589fb8,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:17637689071836483
11,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dfe8e9b-0142-42cf-ba29-27aeadf91605,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4683ce225f87d35eb79e6cefdb9d7c48be7cc40e3230995b8b0843525d0bd27,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba
9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763768905559322925,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2e4d571c23b9bf7d4bfb72e3338c12264469ce70e7e72754afd69967515d13,PodSandboxId:5007bb0b80f021b12e7f6a9a43425ddf2f36685c53a522cd08d8b714c4ec20eb,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763768903867338270,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79de7084-4282-49f8-a4d1-582323611ce3,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37dea366f964b8791137712d958e380263c762d6943592d23f145fad119cd6b5,PodSandboxId:1e73211f223b9745771ca2d0de7f25252821645aa5ecdbedb9590018486b8b7e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900663296805,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-gcprx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cf49f5-ed6e-4aa5-bdfe-2494e5763f39,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f748bb4b27c533f7b87016c8a98346dcf32afaef8d230d73d0764252cbb72f,PodSandboxId:1c267215c3e5b6dac05194130ddb58527757745ce779475a28dd1c84610883b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900552896022,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-r57wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136bb70d-9950-46db-83d9-09b543dc4f72,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe7bb60492b04fd1b087025749469403adeef011272e8f3f22c00ada731cdcb3,PodSandboxId:15b64b5856939f8bac45fb57619eff6fafc932dc05ba97fe1943560b384bd630,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763768898866225812,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vl5f9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09a904d5-755f-4f1f-9525-b10e4e4b57a7,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409,PodSandboxId:f1662e37013476f1ac5ede7d21406a137d8e8672c36dcf49068d30172dc639f2,Metadata:&Container
Metadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763768860250918214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2855a3de-b990-447c-b094-274b5becf1da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d414f30f9b272529b258344187ca2317cbc5a4141f4ffe1bc6fa0f7df80bd5bb,PodSandboxId:79f2d64c3813a612bcd08d986b458be1fc1d2f0a3922d4db70b605b18db55f18,Metadata:&ContainerMetadata{Nam
e:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763768857987378401,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pd4sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fffae7-a3c2-46ef-a382-867c1f45dd2f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198,PodSandboxId:9607023c4fe8e371d977e1e4a2b52b0e80675b763c0cd2b3ae209db0
0b96f2cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763768852094801650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tgk67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad56ae13-a7c4-44e3-a817-73aa300110b6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40,PodSandboxId:1ce41f042f494eef5d0be46b1db7d599bafdcc97fef69e0d7f1782bd275c54ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763768851042427654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6jsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9f1dbf-19b7-4f19-8f33-11b6886f1237,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d,PodSandboxId:a6e11d2b9834f78610e1487003d17adb09fe565622e235df1217303546294639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763768839147876729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9c952f476472031bed61db83e3c978,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e,PodSandboxId:212b2600cae8f28bb69999f005d0b52383e46a97173ab430efc504c81b7dd8ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763768839130699997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca8736316ec035d06c4ec08eb70b85a,},Annotations:map[string]string{io.kubernetes.container.hash: e
9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7,PodSandboxId:7fb7e928bee471d4ec16ce9fb677f233172c530080bee3cf5baf5b1c28826973,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763768839107116372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: e0a4452486189aa3ae5b593dc3a43cac,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a216f1821ac9afeecc64f730374536676eea6c11ca4c8f9bde7913e3b74b219,PodSandboxId:43d68a4f9086a85d2e8c987fa17f958b95725f1d9f37b716010e99d58676be1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763768839078348144,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e0cb0275164bea778c164d97826c53,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0b72486-1982-4956-8b16-24f78f7ef3db name=/runtime.v1.RuntimeService/ListContainers
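The CRI-O entries above and below are the runtime answering the kubelet's periodic Version/ImageFsInfo/ListContainers polls; the ListContainers responses include the nginx container started for this test. Assuming shell access to the node, the same endpoints can be queried directly with crictl (a sketch, not part of the test itself):

    # open a shell on the node backing this profile
    minikube ssh -p addons-266876
    # inside the VM: the same data that appears in the logged responses
    sudo crictl ps -o json       # ListContainers: running containers, including the nginx pod's
    sudo crictl imagefsinfo      # ImageFsInfo: image filesystem mountpoint, bytes and inodes used
    sudo crictl version          # Version: runtime name and version (cri-o 1.29.1 in this run)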
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.190728264Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc68467e-2dac-49a9-a901-a76a6f51d8e5 name=/runtime.v1.RuntimeService/Version
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.190967796Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc68467e-2dac-49a9-a901-a76a6f51d8e5 name=/runtime.v1.RuntimeService/Version
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.192649070Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7d709fd-dea4-4e1e-bb02-597fa7fa7855 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.195775801Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763769279195748485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:532066,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7d709fd-dea4-4e1e-bb02-597fa7fa7855 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.196881653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27180106-b97d-4fc4-a514-79fe591711d7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.197166985Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27180106-b97d-4fc4-a514-79fe591711d7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.198228309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:991f92b0bd577e0738eef35d65b7c9638d3df53ccea140bf89bfa778f911574f,PodSandboxId:f7f9ecdee49d2fbed73c95266531e98528194cc29d1af1e15c07c6a5e790026a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763768959705557696,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e3ed8c7-5788-4d41-aba1-71043fc65fb1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1205f66bfddc482ac5d5dd1e86224c67e867616e61123b413f3bb6856473dc12,PodSandboxId:7a5080c12c12abdaab939ff714e1e919b116a416135fb5a4c0954b2d73aa77d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763768941325630164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b5956ac-11bb-458f-953a-f0fa68bf575e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51813a3108d9e54ce1c3496176ac5114e7bd1188f2c3673f4a4a3480910eced6,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763768914769733150,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491a8ff7c586acede1f8b3b37821df605946465cc57d997a527260e81bc84cbe,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763768913042350736,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62345e24511bafa136f68a223ce7ed0c511a449ccaac17d536939d218364c8e0,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763768911235493555,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a47
1-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c36592147c998eee903b15108ec385188d6a10ba82bdbc75a1e806aedb354e7,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763768910213861282,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
a1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552ab85d759ae6b592b4d62982e120e0f046fdad6cf73d39fa8e079973301b19,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763768908471046022,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b904d30a44673800c0c3034a976f6ac03bbb3ec299f6d92bb1a5c6ea170a7c57,PodSandboxId:ff134a61cd64ed6f5542d7c8c8469ae269bd32b9fdf955f510ccf7e0b5589fb8,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:17637689071836483
11,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dfe8e9b-0142-42cf-ba29-27aeadf91605,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4683ce225f87d35eb79e6cefdb9d7c48be7cc40e3230995b8b0843525d0bd27,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba
9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763768905559322925,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2e4d571c23b9bf7d4bfb72e3338c12264469ce70e7e72754afd69967515d13,PodSandboxId:5007bb0b80f021b12e7f6a9a43425ddf2f36685c53a522cd08d8b714c4ec20eb,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763768903867338270,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79de7084-4282-49f8-a4d1-582323611ce3,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37dea366f964b8791137712d958e380263c762d6943592d23f145fad119cd6b5,PodSandboxId:1e73211f223b9745771ca2d0de7f25252821645aa5ecdbedb9590018486b8b7e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900663296805,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-gcprx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cf49f5-ed6e-4aa5-bdfe-2494e5763f39,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f748bb4b27c533f7b87016c8a98346dcf32afaef8d230d73d0764252cbb72f,PodSandboxId:1c267215c3e5b6dac05194130ddb58527757745ce779475a28dd1c84610883b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900552896022,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-r57wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136bb70d-9950-46db-83d9-09b543dc4f72,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe7bb60492b04fd1b087025749469403adeef011272e8f3f22c00ada731cdcb3,PodSandboxId:15b64b5856939f8bac45fb57619eff6fafc932dc05ba97fe1943560b384bd630,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763768898866225812,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vl5f9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09a904d5-755f-4f1f-9525-b10e4e4b57a7,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409,PodSandboxId:f1662e37013476f1ac5ede7d21406a137d8e8672c36dcf49068d30172dc639f2,Metadata:&Container
Metadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763768860250918214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2855a3de-b990-447c-b094-274b5becf1da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d414f30f9b272529b258344187ca2317cbc5a4141f4ffe1bc6fa0f7df80bd5bb,PodSandboxId:79f2d64c3813a612bcd08d986b458be1fc1d2f0a3922d4db70b605b18db55f18,Metadata:&ContainerMetadata{Nam
e:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763768857987378401,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pd4sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fffae7-a3c2-46ef-a382-867c1f45dd2f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198,PodSandboxId:9607023c4fe8e371d977e1e4a2b52b0e80675b763c0cd2b3ae209db0
0b96f2cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763768852094801650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tgk67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad56ae13-a7c4-44e3-a817-73aa300110b6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40,PodSandboxId:1ce41f042f494eef5d0be46b1db7d599bafdcc97fef69e0d7f1782bd275c54ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763768851042427654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6jsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9f1dbf-19b7-4f19-8f33-11b6886f1237,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d,PodSandboxId:a6e11d2b9834f78610e1487003d17adb09fe565622e235df1217303546294639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763768839147876729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9c952f476472031bed61db83e3c978,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e,PodSandboxId:212b2600cae8f28bb69999f005d0b52383e46a97173ab430efc504c81b7dd8ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763768839130699997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca8736316ec035d06c4ec08eb70b85a,},Annotations:map[string]string{io.kubernetes.container.hash: e
9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7,PodSandboxId:7fb7e928bee471d4ec16ce9fb677f233172c530080bee3cf5baf5b1c28826973,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763768839107116372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: e0a4452486189aa3ae5b593dc3a43cac,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a216f1821ac9afeecc64f730374536676eea6c11ca4c8f9bde7913e3b74b219,PodSandboxId:43d68a4f9086a85d2e8c987fa17f958b95725f1d9f37b716010e99d58676be1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763768839078348144,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e0cb0275164bea778c164d97826c53,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27180106-b97d-4fc4-a514-79fe591711d7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.234708001Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ca440f3-c928-4002-ba66-defed70fbc01 name=/runtime.v1.RuntimeService/Version
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.234778793Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ca440f3-c928-4002-ba66-defed70fbc01 name=/runtime.v1.RuntimeService/Version
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.236189282Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26305365-db1d-4780-8b43-d3692dab3091 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.237365958Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763769279237341380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:532066,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26305365-db1d-4780-8b43-d3692dab3091 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.238645827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd9a1484-3d33-4247-a7f1-c98516a3f5f0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.238706285Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd9a1484-3d33-4247-a7f1-c98516a3f5f0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.239398395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:991f92b0bd577e0738eef35d65b7c9638d3df53ccea140bf89bfa778f911574f,PodSandboxId:f7f9ecdee49d2fbed73c95266531e98528194cc29d1af1e15c07c6a5e790026a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763768959705557696,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e3ed8c7-5788-4d41-aba1-71043fc65fb1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1205f66bfddc482ac5d5dd1e86224c67e867616e61123b413f3bb6856473dc12,PodSandboxId:7a5080c12c12abdaab939ff714e1e919b116a416135fb5a4c0954b2d73aa77d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763768941325630164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b5956ac-11bb-458f-953a-f0fa68bf575e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51813a3108d9e54ce1c3496176ac5114e7bd1188f2c3673f4a4a3480910eced6,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763768914769733150,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491a8ff7c586acede1f8b3b37821df605946465cc57d997a527260e81bc84cbe,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763768913042350736,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62345e24511bafa136f68a223ce7ed0c511a449ccaac17d536939d218364c8e0,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763768911235493555,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a47
1-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c36592147c998eee903b15108ec385188d6a10ba82bdbc75a1e806aedb354e7,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763768910213861282,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
a1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552ab85d759ae6b592b4d62982e120e0f046fdad6cf73d39fa8e079973301b19,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763768908471046022,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b904d30a44673800c0c3034a976f6ac03bbb3ec299f6d92bb1a5c6ea170a7c57,PodSandboxId:ff134a61cd64ed6f5542d7c8c8469ae269bd32b9fdf955f510ccf7e0b5589fb8,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:17637689071836483
11,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dfe8e9b-0142-42cf-ba29-27aeadf91605,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4683ce225f87d35eb79e6cefdb9d7c48be7cc40e3230995b8b0843525d0bd27,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba
9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763768905559322925,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2e4d571c23b9bf7d4bfb72e3338c12264469ce70e7e72754afd69967515d13,PodSandboxId:5007bb0b80f021b12e7f6a9a43425ddf2f36685c53a522cd08d8b714c4ec20eb,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763768903867338270,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79de7084-4282-49f8-a4d1-582323611ce3,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37dea366f964b8791137712d958e380263c762d6943592d23f145fad119cd6b5,PodSandboxId:1e73211f223b9745771ca2d0de7f25252821645aa5ecdbedb9590018486b8b7e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900663296805,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-gcprx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cf49f5-ed6e-4aa5-bdfe-2494e5763f39,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f748bb4b27c533f7b87016c8a98346dcf32afaef8d230d73d0764252cbb72f,PodSandboxId:1c267215c3e5b6dac05194130ddb58527757745ce779475a28dd1c84610883b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900552896022,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-r57wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136bb70d-9950-46db-83d9-09b543dc4f72,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe7bb60492b04fd1b087025749469403adeef011272e8f3f22c00ada731cdcb3,PodSandboxId:15b64b5856939f8bac45fb57619eff6fafc932dc05ba97fe1943560b384bd630,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763768898866225812,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vl5f9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09a904d5-755f-4f1f-9525-b10e4e4b57a7,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409,PodSandboxId:f1662e37013476f1ac5ede7d21406a137d8e8672c36dcf49068d30172dc639f2,Metadata:&Container
Metadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763768860250918214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2855a3de-b990-447c-b094-274b5becf1da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d414f30f9b272529b258344187ca2317cbc5a4141f4ffe1bc6fa0f7df80bd5bb,PodSandboxId:79f2d64c3813a612bcd08d986b458be1fc1d2f0a3922d4db70b605b18db55f18,Metadata:&ContainerMetadata{Nam
e:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763768857987378401,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pd4sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fffae7-a3c2-46ef-a382-867c1f45dd2f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198,PodSandboxId:9607023c4fe8e371d977e1e4a2b52b0e80675b763c0cd2b3ae209db0
0b96f2cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763768852094801650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tgk67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad56ae13-a7c4-44e3-a817-73aa300110b6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40,PodSandboxId:1ce41f042f494eef5d0be46b1db7d599bafdcc97fef69e0d7f1782bd275c54ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763768851042427654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6jsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9f1dbf-19b7-4f19-8f33-11b6886f1237,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d,PodSandboxId:a6e11d2b9834f78610e1487003d17adb09fe565622e235df1217303546294639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763768839147876729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9c952f476472031bed61db83e3c978,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e,PodSandboxId:212b2600cae8f28bb69999f005d0b52383e46a97173ab430efc504c81b7dd8ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763768839130699997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca8736316ec035d06c4ec08eb70b85a,},Annotations:map[string]string{io.kubernetes.container.hash: e
9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7,PodSandboxId:7fb7e928bee471d4ec16ce9fb677f233172c530080bee3cf5baf5b1c28826973,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763768839107116372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: e0a4452486189aa3ae5b593dc3a43cac,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a216f1821ac9afeecc64f730374536676eea6c11ca4c8f9bde7913e3b74b219,PodSandboxId:43d68a4f9086a85d2e8c987fa17f958b95725f1d9f37b716010e99d58676be1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763768839078348144,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e0cb0275164bea778c164d97826c53,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd9a1484-3d33-4247-a7f1-c98516a3f5f0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.275505657Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56d379c7-374f-4142-9878-9f5e93be75e5 name=/runtime.v1.RuntimeService/Version
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.275590889Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56d379c7-374f-4142-9878-9f5e93be75e5 name=/runtime.v1.RuntimeService/Version
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.277124238Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c628a00c-fe61-4407-8de3-585d0c9d8569 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.278341339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763769279278311438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:532066,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c628a00c-fe61-4407-8de3-585d0c9d8569 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.279634265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a14aba5c-3f45-4465-b21e-0d91f6abd35b name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.279700567Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a14aba5c-3f45-4465-b21e-0d91f6abd35b name=/runtime.v1.RuntimeService/ListContainers
	Nov 21 23:54:39 addons-266876 crio[816]: time="2025-11-21 23:54:39.280214803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:991f92b0bd577e0738eef35d65b7c9638d3df53ccea140bf89bfa778f911574f,PodSandboxId:f7f9ecdee49d2fbed73c95266531e98528194cc29d1af1e15c07c6a5e790026a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763768959705557696,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e3ed8c7-5788-4d41-aba1-71043fc65fb1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1205f66bfddc482ac5d5dd1e86224c67e867616e61123b413f3bb6856473dc12,PodSandboxId:7a5080c12c12abdaab939ff714e1e919b116a416135fb5a4c0954b2d73aa77d6,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763768941325630164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6b5956ac-11bb-458f-953a-f0fa68bf575e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51813a3108d9e54ce1c3496176ac5114e7bd1188f2c3673f4a4a3480910eced6,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1763768914769733150,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491a8ff7c586acede1f8b3b37821df605946465cc57d997a527260e81bc84cbe,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1763768913042350736,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62345e24511bafa136f68a223ce7ed0c511a449ccaac17d536939d218364c8e0,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1763768911235493555,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a47
1-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c36592147c998eee903b15108ec385188d6a10ba82bdbc75a1e806aedb354e7,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1763768910213861282,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
a1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552ab85d759ae6b592b4d62982e120e0f046fdad6cf73d39fa8e079973301b19,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1763768908471046022,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b904d30a44673800c0c3034a976f6ac03bbb3ec299f6d92bb1a5c6ea170a7c57,PodSandboxId:ff134a61cd64ed6f5542d7c8c8469ae269bd32b9fdf955f510ccf7e0b5589fb8,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:17637689071836483
11,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dfe8e9b-0142-42cf-ba29-27aeadf91605,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4683ce225f87d35eb79e6cefdb9d7c48be7cc40e3230995b8b0843525d0bd27,PodSandboxId:d6163d79acc662a629113e13beede449ac9bce19bc7d48a2544b81c290a6a5f5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba
9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1763768905559322925,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gvwq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1c57b2-7cf3-4c5f-a471-bf466fbddc0d,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea2e4d571c23b9bf7d4bfb72e3338c12264469ce70e7e72754afd69967515d13,PodSandboxId:5007bb0b80f021b12e7f6a9a43425ddf2f36685c53a522cd08d8b714c4ec20eb,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1763768903867338270,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79de7084-4282-49f8-a4d1-582323611ce3,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37dea366f964b8791137712d958e380263c762d6943592d23f145fad119cd6b5,PodSandboxId:1e73211f223b9745771ca2d0de7f25252821645aa5ecdbedb9590018486b8b7e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900663296805,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-gcprx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38cf49f5-ed6e-4aa5-bdfe-2494e5763f39,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f748bb4b27c533f7b87016c8a98346dcf32afaef8d230d73d0764252cbb72f,PodSandboxId:1c267215c3e5b6dac05194130ddb58527757745ce779475a28dd1c84610883b7,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1763768900552896022,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-r57wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 136bb70d-9950-46db-83d9-09b543dc4f72,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe7bb60492b04fd1b087025749469403adeef011272e8f3f22c00ada731cdcb3,PodSandboxId:15b64b5856939f8bac45fb57619eff6fafc932dc05ba97fe1943560b384bd630,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Ima
ge:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763768898866225812,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-vl5f9,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 09a904d5-755f-4f1f-9525-b10e4e4b57a7,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409,PodSandboxId:f1662e37013476f1ac5ede7d21406a137d8e8672c36dcf49068d30172dc639f2,Metadata:&Container
Metadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763768860250918214,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2855a3de-b990-447c-b094-274b5becf1da,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d414f30f9b272529b258344187ca2317cbc5a4141f4ffe1bc6fa0f7df80bd5bb,PodSandboxId:79f2d64c3813a612bcd08d986b458be1fc1d2f0a3922d4db70b605b18db55f18,Metadata:&ContainerMetadata{Nam
e:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763768857987378401,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pd4sx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88fffae7-a3c2-46ef-a382-867c1f45dd2f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198,PodSandboxId:9607023c4fe8e371d977e1e4a2b52b0e80675b763c0cd2b3ae209db0
0b96f2cc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763768852094801650,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-tgk67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad56ae13-a7c4-44e3-a817-73aa300110b6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.cont
ainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40,PodSandboxId:1ce41f042f494eef5d0be46b1db7d599bafdcc97fef69e0d7f1782bd275c54ee,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763768851042427654,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6jsf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9f1dbf-19b7-4f19-8f33-11b6886f1237,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d,PodSandboxId:a6e11d2b9834f78610e1487003d17adb09fe565622e235df1217303546294639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763768839147876729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d9c952f476472031bed61db83e3c978,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\
":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e,PodSandboxId:212b2600cae8f28bb69999f005d0b52383e46a97173ab430efc504c81b7dd8ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763768839130699997,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ca8736316ec035d06c4ec08eb70b85a,},Annotations:map[string]string{io.kubernetes.container.hash: e
9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7,PodSandboxId:7fb7e928bee471d4ec16ce9fb677f233172c530080bee3cf5baf5b1c28826973,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763768839107116372,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: e0a4452486189aa3ae5b593dc3a43cac,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a216f1821ac9afeecc64f730374536676eea6c11ca4c8f9bde7913e3b74b219,PodSandboxId:43d68a4f9086a85d2e8c987fa17f958b95725f1d9f37b716010e99d58676be1a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763768839078348144,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e0cb0275164bea778c164d97826c53,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a14aba5c-3f45-4465-b21e-0d91f6abd35b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                       NAMESPACE
	991f92b0bd577       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              5 minutes ago       Running             nginx                                    0                   f7f9ecdee49d2       nginx                                     default
	1205f66bfddc4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          5 minutes ago       Running             busybox                                  0                   7a5080c12c12a       busybox                                   default
	51813a3108d9e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   d6163d79acc66       csi-hostpathplugin-gvwq9                  kube-system
	491a8ff7c586a       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   d6163d79acc66       csi-hostpathplugin-gvwq9                  kube-system
	62345e24511ba       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            6 minutes ago       Running             liveness-probe                           0                   d6163d79acc66       csi-hostpathplugin-gvwq9                  kube-system
	4c36592147c99       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           6 minutes ago       Running             hostpath                                 0                   d6163d79acc66       csi-hostpathplugin-gvwq9                  kube-system
	552ab85d759ae       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                6 minutes ago       Running             node-driver-registrar                    0                   d6163d79acc66       csi-hostpathplugin-gvwq9                  kube-system
	b904d30a44673       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             6 minutes ago       Running             csi-attacher                             0                   ff134a61cd64e       csi-hostpath-attacher-0                   kube-system
	b4683ce225f87       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   6 minutes ago       Running             csi-external-health-monitor-controller   0                   d6163d79acc66       csi-hostpathplugin-gvwq9                  kube-system
	ea2e4d571c23b       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              6 minutes ago       Running             csi-resizer                              0                   5007bb0b80f02       csi-hostpath-resizer-0                    kube-system
	37dea366f964b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   1e73211f223b9       snapshot-controller-7d9fbc56b8-gcprx      kube-system
	16f748bb4b27c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      6 minutes ago       Running             volume-snapshot-controller               0                   1c267215c3e5b       snapshot-controller-7d9fbc56b8-r57wx      kube-system
	fe7bb60492b04       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             6 minutes ago       Running             local-path-provisioner                   0                   15b64b5856939       local-path-provisioner-648f6765c9-vl5f9   local-path-storage
	62fac18e2a4ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             6 minutes ago       Running             storage-provisioner                      0                   f1662e3701347       storage-provisioner                       kube-system
	d414f30f9b272       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     7 minutes ago       Running             amd-gpu-device-plugin                    0                   79f2d64c3813a       amd-gpu-device-plugin-pd4sx               kube-system
	e880e3438bfbb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             7 minutes ago       Running             coredns                                  0                   9607023c4fe8e       coredns-66bc5c9577-tgk67                  kube-system
	9ba59e7c8953d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             7 minutes ago       Running             kube-proxy                               0                   1ce41f042f494       kube-proxy-d6jsf                          kube-system
	8d89e7dd43a03       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             7 minutes ago       Running             kube-scheduler                           0                   a6e11d2b9834f       kube-scheduler-addons-266876              kube-system
	5c5891e44197c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             7 minutes ago       Running             etcd                                     0                   212b2600cae8f       etcd-addons-266876                        kube-system
	9b2349c8754b0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             7 minutes ago       Running             kube-apiserver                           0                   7fb7e928bee47       kube-apiserver-addons-266876              kube-system
	3a216f1821ac9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             7 minutes ago       Running             kube-controller-manager                  0                   43d68a4f9086a       kube-controller-manager-addons-266876     kube-system
	
	
	==> coredns [e880e3438bfbb021c8b745fd3f9eff8bd901a71ba1ac5890af86a6f9ccce7198] <==
	[INFO] 10.244.0.22:50347 - 35168 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000108631s
	[INFO] 10.244.0.22:50347 - 54877 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000216081s
	[INFO] 10.244.0.22:50347 - 43294 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000179721s
	[INFO] 10.244.0.22:50347 - 27330 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000599569s
	[INFO] 10.244.0.22:60336 - 12888 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00056772s
	[INFO] 10.244.0.22:60336 - 36795 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000091403s
	[INFO] 10.244.0.22:60336 - 25266 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000099994s
	[INFO] 10.244.0.22:60336 - 33320 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000197786s
	[INFO] 10.244.0.22:60336 - 45387 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000148602s
	[INFO] 10.244.0.22:60336 - 51395 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000280512s
	[INFO] 10.244.0.22:60336 - 17580 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00009954s
	[INFO] 10.244.0.22:39320 - 9584 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000406115s
	[INFO] 10.244.0.22:53756 - 38444 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000126906s
	[INFO] 10.244.0.22:39320 - 47717 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000763646s
	[INFO] 10.244.0.22:39320 - 18735 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000161717s
	[INFO] 10.244.0.22:39320 - 58264 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000251885s
	[INFO] 10.244.0.22:39320 - 54900 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000138827s
	[INFO] 10.244.0.22:39320 - 7817 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000170117s
	[INFO] 10.244.0.22:53756 - 5097 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000284667s
	[INFO] 10.244.0.22:39320 - 60449 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000242395s
	[INFO] 10.244.0.22:53756 - 6963 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000095016s
	[INFO] 10.244.0.22:53756 - 9121 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000092889s
	[INFO] 10.244.0.22:53756 - 50282 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000104129s
	[INFO] 10.244.0.22:53756 - 63714 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000089905s
	[INFO] 10.244.0.22:53756 - 29550 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000102334s
	
	
	==> describe nodes <==
	Name:               addons-266876
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-266876
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=addons-266876
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T23_47_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-266876
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-266876"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 23:47:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-266876
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 23:54:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 23:52:41 +0000   Fri, 21 Nov 2025 23:47:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 23:52:41 +0000   Fri, 21 Nov 2025 23:47:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 23:52:41 +0000   Fri, 21 Nov 2025 23:47:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 23:52:41 +0000   Fri, 21 Nov 2025 23:47:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    addons-266876
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4a95d5c27154bec8bc2a50909bf4217
	  System UUID:                c4a95d5c-2715-4bec-8bc2-a50909bf4217
	  Boot ID:                    7afcec11-c11b-4436-b252-c2dac139e51f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  default                     hello-world-app-5d498dc89-sqvxb                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 amd-gpu-device-plugin-pd4sx                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 coredns-66bc5c9577-tgk67                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m9s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 csi-hostpathplugin-gvwq9                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 etcd-addons-266876                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m15s
	  kube-system                 kube-apiserver-addons-266876                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-controller-manager-addons-266876                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-proxy-d6jsf                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-scheduler-addons-266876                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 snapshot-controller-7d9fbc56b8-gcprx                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 snapshot-controller-7d9fbc56b8-r57wx                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  local-path-storage          helper-pod-create-pvc-b9a0f343-b333-4fad-87d0-620f3a86218a    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  local-path-storage          local-path-provisioner-648f6765c9-vl5f9                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m7s                   kube-proxy       
	  Normal  Starting                 7m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m21s (x8 over 7m21s)  kubelet          Node addons-266876 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m21s (x8 over 7m21s)  kubelet          Node addons-266876 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m21s (x7 over 7m21s)  kubelet          Node addons-266876 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m14s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m14s                  kubelet          Node addons-266876 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m14s                  kubelet          Node addons-266876 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m14s                  kubelet          Node addons-266876 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m13s                  kubelet          Node addons-266876 status is now: NodeReady
	  Normal  RegisteredNode           7m10s                  node-controller  Node addons-266876 event: Registered Node addons-266876 in Controller
	
	
	==> dmesg <==
	[  +1.386553] kauditd_printk_skb: 314 callbacks suppressed
	[  +3.245635] kauditd_printk_skb: 404 callbacks suppressed
	[  +8.078733] kauditd_printk_skb: 5 callbacks suppressed
	[Nov21 23:48] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.490595] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.260482] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.041216] kauditd_printk_skb: 113 callbacks suppressed
	[  +5.004515] kauditd_printk_skb: 80 callbacks suppressed
	[  +3.836804] kauditd_printk_skb: 136 callbacks suppressed
	[  +4.200452] kauditd_printk_skb: 82 callbacks suppressed
	[  +0.000029] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000031] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.254098] kauditd_printk_skb: 53 callbacks suppressed
	[Nov21 23:49] kauditd_printk_skb: 47 callbacks suppressed
	[  +9.475817] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.686428] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.598673] kauditd_printk_skb: 95 callbacks suppressed
	[  +1.253211] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.652321] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.000031] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.880165] kauditd_printk_skb: 114 callbacks suppressed
	[Nov21 23:51] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.811687] kauditd_printk_skb: 51 callbacks suppressed
	[  +3.142587] kauditd_printk_skb: 10 callbacks suppressed
	[Nov21 23:52] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [5c5891e44197cae1d6761a70d34df341e353e30c01b0d5a7326a9560ace3813e] <==
	{"level":"info","ts":"2025-11-21T23:47:56.169034Z","caller":"traceutil/trace.go:172","msg":"trace[503038924] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:933; }","duration":"252.581534ms","start":"2025-11-21T23:47:55.916448Z","end":"2025-11-21T23:47:56.169029Z","steps":["trace[503038924] 'agreement among raft nodes before linearized reading'  (duration: 252.55235ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-21T23:47:59.352083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:59.363648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:59.513514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:59.589561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57252","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T23:48:08.513782Z","caller":"traceutil/trace.go:172","msg":"trace[715400112] transaction","detail":"{read_only:false; response_revision:978; number_of_response:1; }","duration":"116.418162ms","start":"2025-11-21T23:48:08.397351Z","end":"2025-11-21T23:48:08.513770Z","steps":["trace[715400112] 'process raft request'  (duration: 116.119443ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:10.824125Z","caller":"traceutil/trace.go:172","msg":"trace[2036679806] linearizableReadLoop","detail":"{readStateIndex:1014; appliedIndex:1014; }","duration":"203.849321ms","start":"2025-11-21T23:48:10.620261Z","end":"2025-11-21T23:48:10.824110Z","steps":["trace[2036679806] 'read index received'  (duration: 203.843953ms)","trace[2036679806] 'applied index is now lower than readState.Index'  (duration: 4.512µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T23:48:10.824235Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.952821ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T23:48:10.824255Z","caller":"traceutil/trace.go:172","msg":"trace[1038609178] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:986; }","duration":"203.992763ms","start":"2025-11-21T23:48:10.620257Z","end":"2025-11-21T23:48:10.824249Z","steps":["trace[1038609178] 'agreement among raft nodes before linearized reading'  (duration: 203.924903ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:10.827067Z","caller":"traceutil/trace.go:172","msg":"trace[958942931] transaction","detail":"{read_only:false; response_revision:987; number_of_response:1; }","duration":"216.790232ms","start":"2025-11-21T23:48:10.610267Z","end":"2025-11-21T23:48:10.827057Z","steps":["trace[958942931] 'process raft request'  (duration: 213.950708ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:17.235529Z","caller":"traceutil/trace.go:172","msg":"trace[2072959660] linearizableReadLoop","detail":"{readStateIndex:1040; appliedIndex:1040; }","duration":"118.859084ms","start":"2025-11-21T23:48:17.116651Z","end":"2025-11-21T23:48:17.235510Z","steps":["trace[2072959660] 'read index received'  (duration: 118.853824ms)","trace[2072959660] 'applied index is now lower than readState.Index'  (duration: 4.479µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T23:48:17.235633Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.964818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T23:48:17.235650Z","caller":"traceutil/trace.go:172","msg":"trace[1291312129] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1011; }","duration":"118.997232ms","start":"2025-11-21T23:48:17.116647Z","end":"2025-11-21T23:48:17.235645Z","steps":["trace[1291312129] 'agreement among raft nodes before linearized reading'  (duration: 118.929178ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:17.236014Z","caller":"traceutil/trace.go:172","msg":"trace[409496112] transaction","detail":"{read_only:false; response_revision:1012; number_of_response:1; }","duration":"245.19274ms","start":"2025-11-21T23:48:16.990813Z","end":"2025-11-21T23:48:17.236006Z","steps":["trace[409496112] 'process raft request'  (duration: 245.052969ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:20.410362Z","caller":"traceutil/trace.go:172","msg":"trace[828505748] transaction","detail":"{read_only:false; response_revision:1027; number_of_response:1; }","duration":"157.893848ms","start":"2025-11-21T23:48:20.252456Z","end":"2025-11-21T23:48:20.410350Z","steps":["trace[828505748] 'process raft request'  (duration: 157.749487ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:26.972869Z","caller":"traceutil/trace.go:172","msg":"trace[583749754] transaction","detail":"{read_only:false; response_revision:1085; number_of_response:1; }","duration":"180.54926ms","start":"2025-11-21T23:48:26.792295Z","end":"2025-11-21T23:48:26.972845Z","steps":["trace[583749754] 'process raft request'  (duration: 180.444491ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:55.718332Z","caller":"traceutil/trace.go:172","msg":"trace[218102785] linearizableReadLoop","detail":"{readStateIndex:1235; appliedIndex:1235; }","duration":"102.447461ms","start":"2025-11-21T23:48:55.615863Z","end":"2025-11-21T23:48:55.718310Z","steps":["trace[218102785] 'read index received'  (duration: 102.442519ms)","trace[218102785] 'applied index is now lower than readState.Index'  (duration: 4.145µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T23:48:55.718517Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.662851ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T23:48:55.718556Z","caller":"traceutil/trace.go:172","msg":"trace[280205783] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1197; }","duration":"102.741104ms","start":"2025-11-21T23:48:55.615807Z","end":"2025-11-21T23:48:55.718548Z","steps":["trace[280205783] 'agreement among raft nodes before linearized reading'  (duration: 102.634025ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:48:55.718853Z","caller":"traceutil/trace.go:172","msg":"trace[1563407473] transaction","detail":"{read_only:false; response_revision:1198; number_of_response:1; }","duration":"160.082369ms","start":"2025-11-21T23:48:55.558762Z","end":"2025-11-21T23:48:55.718844Z","steps":["trace[1563407473] 'process raft request'  (duration: 160.006081ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:49:25.230279Z","caller":"traceutil/trace.go:172","msg":"trace[1746671191] transaction","detail":"{read_only:false; response_revision:1422; number_of_response:1; }","duration":"130.337483ms","start":"2025-11-21T23:49:25.099914Z","end":"2025-11-21T23:49:25.230251Z","steps":["trace[1746671191] 'process raft request'  (duration: 128.456166ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:49:31.443123Z","caller":"traceutil/trace.go:172","msg":"trace[1229097043] linearizableReadLoop","detail":"{readStateIndex:1512; appliedIndex:1512; }","duration":"121.2839ms","start":"2025-11-21T23:49:31.321821Z","end":"2025-11-21T23:49:31.443104Z","steps":["trace[1229097043] 'read index received'  (duration: 121.277728ms)","trace[1229097043] 'applied index is now lower than readState.Index'  (duration: 4.966µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-21T23:49:31.443287Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.446592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T23:49:31.443311Z","caller":"traceutil/trace.go:172","msg":"trace[1460275697] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1465; }","duration":"121.507541ms","start":"2025-11-21T23:49:31.321797Z","end":"2025-11-21T23:49:31.443305Z","steps":["trace[1460275697] 'agreement among raft nodes before linearized reading'  (duration: 121.416565ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-21T23:49:31.444122Z","caller":"traceutil/trace.go:172","msg":"trace[1873839518] transaction","detail":"{read_only:false; response_revision:1466; number_of_response:1; }","duration":"152.736081ms","start":"2025-11-21T23:49:31.291375Z","end":"2025-11-21T23:49:31.444111Z","steps":["trace[1873839518] 'process raft request'  (duration: 152.523387ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:54:39 up 7 min,  0 users,  load average: 0.22, 0.84, 0.62
	Linux addons-266876 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [9b2349c8754b0dab18ba46ec5bfeab82bbf19463f511389a62129f35aebcece7] <==
	W1121 23:47:42.366369       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1121 23:47:42.410842       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1121 23:47:42.649275       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.103.205.35"}
	I1121 23:47:44.226888       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.96.203.27"}
	W1121 23:47:59.343614       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1121 23:47:59.366318       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1121 23:47:59.513667       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1121 23:47:59.564438       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1121 23:48:11.667772       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:48:11.669231       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.6.129:443: connect: connection refused" logger="UnhandledError"
	E1121 23:48:11.670277       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1121 23:48:11.672393       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.6.129:443: connect: connection refused" logger="UnhandledError"
	E1121 23:48:11.677441       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.6.129:443: connect: connection refused" logger="UnhandledError"
	E1121 23:48:11.699611       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.6.129:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.6.129:443: connect: connection refused" logger="UnhandledError"
	I1121 23:48:11.830392       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1121 23:49:07.600969       1 conn.go:339] Error on socket receive: read tcp 192.168.39.50:8443->192.168.39.1:39632: use of closed network connection
	E1121 23:49:07.806030       1 conn.go:339] Error on socket receive: read tcp 192.168.39.50:8443->192.168.39.1:39646: use of closed network connection
	I1121 23:49:16.529402       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1121 23:49:16.732737       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.151.240"}
	I1121 23:49:17.182251       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.21.116"}
	I1121 23:50:12.699613       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1121 23:51:44.509660       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.217.27"}
	
	
	==> kube-controller-manager [3a216f1821ac9afeecc64f730374536676eea6c11ca4c8f9bde7913e3b74b219] <==
	I1121 23:47:29.337520       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 23:47:29.337577       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1121 23:47:29.337666       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 23:47:29.338254       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 23:47:29.338833       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 23:47:29.339107       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 23:47:29.340477       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 23:47:29.340506       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 23:47:29.341040       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 23:47:29.343803       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 23:47:29.357152       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1121 23:47:29.371508       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1121 23:47:37.577649       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1121 23:47:59.325689       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1121 23:47:59.326487       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1121 23:47:59.326701       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1121 23:47:59.433161       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1121 23:47:59.436132       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 23:47:59.460324       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1121 23:47:59.669783       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 23:49:21.118711       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1121 23:49:39.996446       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I1121 23:49:43.075346       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I1121 23:49:50.779389       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I1121 23:51:58.890395       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	
	
	==> kube-proxy [9ba59e7c8953d9801a0002a71d6b0d76ceb61d8240d18301e56bb626f422fb40] <==
	I1121 23:47:31.549237       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 23:47:31.651147       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 23:47:31.651198       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.50"]
	E1121 23:47:31.651275       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 23:47:31.974605       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1121 23:47:31.975156       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1121 23:47:31.975763       1 server_linux.go:132] "Using iptables Proxier"
	I1121 23:47:32.024377       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 23:47:32.026629       1 server.go:527] "Version info" version="v1.34.1"
	I1121 23:47:32.026711       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 23:47:32.034053       1 config.go:200] "Starting service config controller"
	I1121 23:47:32.034241       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 23:47:32.034262       1 config.go:106] "Starting endpoint slice config controller"
	I1121 23:47:32.034266       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 23:47:32.034276       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 23:47:32.034279       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 23:47:32.039494       1 config.go:309] "Starting node config controller"
	I1121 23:47:32.039506       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 23:47:32.039512       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 23:47:32.134526       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 23:47:32.134549       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 23:47:32.134580       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8d89e7dd43a03bbcb90d35b0084800be3d656bfa6da9cea1a1cc88c7dc51493d] <==
	E1121 23:47:22.475530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 23:47:22.475591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 23:47:22.475644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 23:47:22.475674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 23:47:22.475781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 23:47:22.475833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 23:47:22.475877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 23:47:22.476028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 23:47:22.476096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 23:47:23.318227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 23:47:23.496497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 23:47:23.525267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 23:47:23.575530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 23:47:23.578656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 23:47:23.593013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 23:47:23.593144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 23:47:23.685009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 23:47:23.695610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 23:47:23.719024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 23:47:23.735984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 23:47:23.781311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 23:47:23.797047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 23:47:23.818758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1121 23:47:23.836424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1121 23:47:26.255559       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 23:53:24 addons-266876 kubelet[1502]: E1121 23:53:24.624301    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-b9a0f343-b333-4fad-87d0-620f3a86218a" podUID="5b56ac87-ee47-4db4-9910-2c199e439aec"
	Nov 21 23:53:25 addons-266876 kubelet[1502]: E1121 23:53:25.886288    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769205885509053  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:53:25 addons-266876 kubelet[1502]: E1121 23:53:25.886866    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769205885509053  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:53:35 addons-266876 kubelet[1502]: E1121 23:53:35.889886    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769215889297560  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:53:35 addons-266876 kubelet[1502]: E1121 23:53:35.889910    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769215889297560  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:53:45 addons-266876 kubelet[1502]: E1121 23:53:45.895445    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769225894812929  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:53:45 addons-266876 kubelet[1502]: E1121 23:53:45.895793    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769225894812929  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:53:54 addons-266876 kubelet[1502]: E1121 23:53:54.261527    1502 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 21 23:53:54 addons-266876 kubelet[1502]: E1121 23:53:54.261612    1502 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 21 23:53:54 addons-266876 kubelet[1502]: E1121 23:53:54.261896    1502 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(484e38f0-cbc8-4850-8360-07b1ea3e62a0): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 21 23:53:54 addons-266876 kubelet[1502]: E1121 23:53:54.261991    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="484e38f0-cbc8-4850-8360-07b1ea3e62a0"
	Nov 21 23:53:55 addons-266876 kubelet[1502]: E1121 23:53:55.903198    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769235901705892  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:53:55 addons-266876 kubelet[1502]: E1121 23:53:55.903244    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769235901705892  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:53:58 addons-266876 kubelet[1502]: I1121 23:53:58.399825    1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-pd4sx" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:53:59 addons-266876 kubelet[1502]: I1121 23:53:59.400507    1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:54:05 addons-266876 kubelet[1502]: E1121 23:54:05.907778    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769245907159652  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:54:05 addons-266876 kubelet[1502]: E1121 23:54:05.908236    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769245907159652  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:54:07 addons-266876 kubelet[1502]: E1121 23:54:07.401185    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="484e38f0-cbc8-4850-8360-07b1ea3e62a0"
	Nov 21 23:54:15 addons-266876 kubelet[1502]: E1121 23:54:15.911399    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769255910869236  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:54:15 addons-266876 kubelet[1502]: E1121 23:54:15.911444    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769255910869236  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:54:21 addons-266876 kubelet[1502]: E1121 23:54:21.401098    1502 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="484e38f0-cbc8-4850-8360-07b1ea3e62a0"
	Nov 21 23:54:25 addons-266876 kubelet[1502]: E1121 23:54:25.915173    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769265914497687  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:54:25 addons-266876 kubelet[1502]: E1121 23:54:25.915198    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769265914497687  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:54:35 addons-266876 kubelet[1502]: E1121 23:54:35.918006    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763769275917580055  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	Nov 21 23:54:35 addons-266876 kubelet[1502]: E1121 23:54:35.918313    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763769275917580055  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:532066}  inodes_used:{value:186}}"
	
	
	==> storage-provisioner [62fac18e2a4ec25dc356850c017cf9e035abf1bb8a3d580cf0053ef797061409] <==
	W1121 23:54:15.727654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:17.731598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:17.737631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:19.742378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:19.750604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:21.755105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:21.759678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:23.763051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:23.770197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:25.774235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:25.781048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:27.785828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:27.795026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:29.799204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:29.805027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:31.808884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:31.813840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:33.818303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:33.826721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:35.831139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:35.837828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:37.842253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:37.848042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:39.856064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:54:39.868560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-266876 -n addons-266876
helpers_test.go:269: (dbg) Run:  kubectl --context addons-266876 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-sqvxb task-pv-pod test-local-path helper-pod-create-pvc-b9a0f343-b333-4fad-87d0-620f3a86218a
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-266876 describe pod hello-world-app-5d498dc89-sqvxb task-pv-pod test-local-path helper-pod-create-pvc-b9a0f343-b333-4fad-87d0-620f3a86218a
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-266876 describe pod hello-world-app-5d498dc89-sqvxb task-pv-pod test-local-path helper-pod-create-pvc-b9a0f343-b333-4fad-87d0-620f3a86218a: exit status 1 (95.002806ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-sqvxb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-266876/192.168.39.50
	Start Time:       Fri, 21 Nov 2025 23:51:44 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:           10.244.0.30
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dhdwl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dhdwl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m56s                default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-sqvxb to addons-266876
	  Warning  Failed     106s                 kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     106s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    106s                 kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     106s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    91s (x2 over 2m56s)  kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-266876/192.168.39.50
	Start Time:       Fri, 21 Nov 2025 23:49:41 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cj5dd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-cj5dd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m59s                default-scheduler  Successfully assigned default/task-pv-pod to addons-266876
	  Warning  Failed     3m31s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     46s (x3 over 3m31s)  kubelet            Error: ErrImagePull
	  Warning  Failed     46s (x2 over 2m31s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    19s (x4 over 3m31s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     19s (x4 over 3m31s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    4s (x4 over 4m59s)   kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-24fvr (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-24fvr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "helper-pod-create-pvc-b9a0f343-b333-4fad-87d0-620f3a86218a" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-266876 describe pod hello-world-app-5d498dc89-sqvxb task-pv-pod test-local-path helper-pod-create-pvc-b9a0f343-b333-4fad-87d0-620f3a86218a: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/LocalPath (302.53s)
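Both post-mortem describes above point at the same root cause: unauthenticated pulls from docker.io hit Docker Hub's rate limit ("toomanyrequests"), so the containers never leave ErrImagePull/ImagePullBackOff. As a minimal client-go sketch of how such a stuck-pull condition could be spotted programmatically (the program and kubeconfig handling are illustrative only, not part of the minikube test suite):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the cluster's kubeconfig is in the default location ($HOME/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Flag containers stuck waiting on an image pull, as in the describe output above.
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if w := cs.State.Waiting; w != nil &&
				(w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
				fmt.Printf("%s/%s: %s (%s)\n", pod.Namespace, pod.Name, w.Reason, w.Message)
			}
		}
	}
}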

                                                
                                    
TestFunctional/parallel/DashboardCmd (302.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-783762 --alsologtostderr -v=1]
E1122 00:03:58.474077  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:04:26.188951  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-783762 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-783762 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-783762 --alsologtostderr -v=1] stderr:
I1122 00:03:56.994611  260668 out.go:360] Setting OutFile to fd 1 ...
I1122 00:03:56.994735  260668 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:03:56.994745  260668 out.go:374] Setting ErrFile to fd 2...
I1122 00:03:56.994749  260668 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:03:56.994970  260668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
I1122 00:03:56.995240  260668 mustload.go:66] Loading cluster: functional-783762
I1122 00:03:56.995603  260668 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:03:56.997406  260668 host.go:66] Checking if "functional-783762" exists ...
I1122 00:03:56.997617  260668 api_server.go:166] Checking apiserver status ...
I1122 00:03:56.997661  260668 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1122 00:03:57.000032  260668 main.go:143] libmachine: domain functional-783762 has defined MAC address 52:54:00:34:a6:b6 in network mk-functional-783762
I1122 00:03:57.000546  260668 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:34:a6:b6", ip: ""} in network mk-functional-783762: {Iface:virbr1 ExpiryTime:2025-11-22 00:59:48 +0000 UTC Type:0 Mac:52:54:00:34:a6:b6 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:functional-783762 Clientid:01:52:54:00:34:a6:b6}
I1122 00:03:57.000574  260668 main.go:143] libmachine: domain functional-783762 has defined IP address 192.168.39.76 and MAC address 52:54:00:34:a6:b6 in network mk-functional-783762
I1122 00:03:57.000810  260668 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/functional-783762/id_rsa Username:docker}
I1122 00:03:57.104212  260668 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/7078/cgroup
W1122 00:03:57.120060  260668 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/7078/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1122 00:03:57.120119  260668 ssh_runner.go:195] Run: ls
I1122 00:03:57.127749  260668 api_server.go:253] Checking apiserver healthz at https://192.168.39.76:8441/healthz ...
I1122 00:03:57.133818  260668 api_server.go:279] https://192.168.39.76:8441/healthz returned 200:
ok
W1122 00:03:57.133877  260668 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1122 00:03:57.134038  260668 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:03:57.134049  260668 addons.go:70] Setting dashboard=true in profile "functional-783762"
I1122 00:03:57.134055  260668 addons.go:239] Setting addon dashboard=true in "functional-783762"
I1122 00:03:57.134085  260668 host.go:66] Checking if "functional-783762" exists ...
I1122 00:03:57.138221  260668 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1122 00:03:57.139863  260668 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1122 00:03:57.141171  260668 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1122 00:03:57.141195  260668 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1122 00:03:57.144148  260668 main.go:143] libmachine: domain functional-783762 has defined MAC address 52:54:00:34:a6:b6 in network mk-functional-783762
I1122 00:03:57.144588  260668 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:34:a6:b6", ip: ""} in network mk-functional-783762: {Iface:virbr1 ExpiryTime:2025-11-22 00:59:48 +0000 UTC Type:0 Mac:52:54:00:34:a6:b6 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:functional-783762 Clientid:01:52:54:00:34:a6:b6}
I1122 00:03:57.144614  260668 main.go:143] libmachine: domain functional-783762 has defined IP address 192.168.39.76 and MAC address 52:54:00:34:a6:b6 in network mk-functional-783762
I1122 00:03:57.144800  260668 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/functional-783762/id_rsa Username:docker}
I1122 00:03:57.250272  260668 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1122 00:03:57.250306  260668 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1122 00:03:57.275190  260668 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1122 00:03:57.275227  260668 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1122 00:03:57.303934  260668 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1122 00:03:57.303966  260668 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1122 00:03:57.334835  260668 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1122 00:03:57.334868  260668 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1122 00:03:57.365377  260668 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1122 00:03:57.365404  260668 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1122 00:03:57.389968  260668 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1122 00:03:57.390003  260668 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1122 00:03:57.415001  260668 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1122 00:03:57.415035  260668 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1122 00:03:57.439968  260668 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1122 00:03:57.440000  260668 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1122 00:03:57.466404  260668 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1122 00:03:57.466437  260668 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1122 00:03:57.492069  260668 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1122 00:03:58.311394  260668 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-783762 addons enable metrics-server

                                                
                                                
I1122 00:03:58.313039  260668 addons.go:202] Writing out "functional-783762" config to set dashboard=true...
W1122 00:03:58.313391  260668 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1122 00:03:58.314358  260668 kapi.go:59] client config for functional-783762: &rest.Config{Host:"https://192.168.39.76:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.key", CAFile:"/home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1122 00:03:58.315097  260668 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1122 00:03:58.315125  260668 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1122 00:03:58.315132  260668 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1122 00:03:58.315146  260668 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1122 00:03:58.315156  260668 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1122 00:03:58.329635  260668 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  b8ce42c6-e27f-48a1-9def-bd2938fc8b26 920 0 2025-11-22 00:03:58 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-11-22 00:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.106.197.188,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.106.197.188],IPFamilies:[IPv4],AllocateLoadBalan
cerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1122 00:03:58.329872  260668 out.go:285] * Launching proxy ...
* Launching proxy ...
I1122 00:03:58.330005  260668 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-783762 proxy --port 36195]
I1122 00:03:58.330457  260668 dashboard.go:159] Waiting for kubectl to output host:port ...
I1122 00:03:58.379695  260668 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1122 00:03:58.379790  260668 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1122 00:03:58.394267  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[15862529-12cf-4fdd-b7e8-ca5d09ef4efa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc001731000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000138280 TLS:<nil>}
I1122 00:03:58.394406  260668 retry.go:31] will retry after 131.381µs: Temporary Error: unexpected response code: 503
I1122 00:03:58.399231  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fbdd05ee-1caf-4954-978d-7eae823c7d3b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc0017310c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208dc0 TLS:<nil>}
I1122 00:03:58.399292  260668 retry.go:31] will retry after 145.589µs: Temporary Error: unexpected response code: 503
I1122 00:03:58.407371  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[323dafe9-85d2-42a7-8968-f27604f20670] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc0008eb400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208f00 TLS:<nil>}
I1122 00:03:58.407429  260668 retry.go:31] will retry after 142.02µs: Temporary Error: unexpected response code: 503
I1122 00:03:58.411989  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d345e3a4-57c6-4ce5-bd8e-5eb938c35874] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc0017311c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00013f900 TLS:<nil>}
I1122 00:03:58.412058  260668 retry.go:31] will retry after 186.136µs: Temporary Error: unexpected response code: 503
I1122 00:03:58.416380  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aeacf390-1a45-4e12-8376-eb14eadc530f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc001644700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209040 TLS:<nil>}
I1122 00:03:58.416471  260668 retry.go:31] will retry after 492.989µs: Temporary Error: unexpected response code: 503
I1122 00:03:58.420919  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[92c4bdaa-12aa-4ae2-9966-bce0b510d54c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc0008eb580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001383c0 TLS:<nil>}
I1122 00:03:58.421021  260668 retry.go:31] will retry after 704.798µs: Temporary Error: unexpected response code: 503
I1122 00:03:58.424924  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c1bf0b9e-52ad-4c11-be28-6df3aad9267d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc0017312c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00013fa40 TLS:<nil>}
I1122 00:03:58.425020  260668 retry.go:31] will retry after 1.161915ms: Temporary Error: unexpected response code: 503
I1122 00:03:58.430550  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a396168e-9c39-476b-9dc8-bdfc17b9461d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc001644800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209180 TLS:<nil>}
I1122 00:03:58.430623  260668 retry.go:31] will retry after 2.063445ms: Temporary Error: unexpected response code: 503
I1122 00:03:58.436092  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7c9f78eb-eb36-4f96-a278-502c1b4661d8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc00154a080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000138500 TLS:<nil>}
I1122 00:03:58.436160  260668 retry.go:31] will retry after 3.605503ms: Temporary Error: unexpected response code: 503
I1122 00:03:58.443998  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[95e73fc0-c795-4551-aa60-ffbcb50dde80] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc0017313c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00037e140 TLS:<nil>}
I1122 00:03:58.444072  260668 retry.go:31] will retry after 3.180078ms: Temporary Error: unexpected response code: 503
I1122 00:03:58.453908  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[82710763-7db0-4ac3-addd-e8d8540ed981] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc0008eb640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209400 TLS:<nil>}
I1122 00:03:58.453993  260668 retry.go:31] will retry after 3.367406ms: Temporary Error: unexpected response code: 503
I1122 00:03:58.464500  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f75efd59-b90c-4cbf-9175-68519a240803] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc00154a1c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00013fb80 TLS:<nil>}
I1122 00:03:58.464592  260668 retry.go:31] will retry after 4.540929ms: Temporary Error: unexpected response code: 503
I1122 00:03:58.474328  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f0c37105-ff88-4588-b465-dc8e4aad63d7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc0008eb740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00037e280 TLS:<nil>}
I1122 00:03:58.474412  260668 retry.go:31] will retry after 14.786505ms: Temporary Error: unexpected response code: 503
I1122 00:03:58.495177  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0da2c5c4-d8ee-403b-abc5-77eb22ab8050] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc00154a2c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00013fcc0 TLS:<nil>}
I1122 00:03:58.495292  260668 retry.go:31] will retry after 20.132273ms: Temporary Error: unexpected response code: 503
I1122 00:03:58.527076  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9cd99f59-8430-42d5-bfff-3eb1acbc7843] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc0017314c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00037e3c0 TLS:<nil>}
I1122 00:03:58.527172  260668 retry.go:31] will retry after 32.876541ms: Temporary Error: unexpected response code: 503
I1122 00:03:58.565996  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d5bfd221-bd12-46b4-862b-446846e338ff] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc00154a3c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209540 TLS:<nil>}
I1122 00:03:58.566073  260668 retry.go:31] will retry after 40.774794ms: Temporary Error: unexpected response code: 503
I1122 00:03:58.613828  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cdd7e4f5-1c19-4f39-9338-010cc88dd6e7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc0008eb840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00037e500 TLS:<nil>}
I1122 00:03:58.613942  260668 retry.go:31] will retry after 69.736188ms: Temporary Error: unexpected response code: 503
I1122 00:03:58.691864  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[28f1cbbf-10e0-4665-b3b3-cd617d22deff] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc00154a4c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00013fe00 TLS:<nil>}
I1122 00:03:58.691951  260668 retry.go:31] will retry after 81.007652ms: Temporary Error: unexpected response code: 503
I1122 00:03:58.780220  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1431a1cc-fa6f-4458-8aa8-1b4e7fe71b25] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc0017315c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00037e640 TLS:<nil>}
I1122 00:03:58.780304  260668 retry.go:31] will retry after 136.363604ms: Temporary Error: unexpected response code: 503
I1122 00:03:58.921530  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3a96dbc3-60a5-43a9-a05c-e94960326600] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:58 GMT]] Body:0xc0008eb940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209680 TLS:<nil>}
I1122 00:03:58.921609  260668 retry.go:31] will retry after 224.927871ms: Temporary Error: unexpected response code: 503
I1122 00:03:59.151016  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bd5f171b-fd89-40bd-83e9-a7d1d5a27b2b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:59 GMT]] Body:0xc00154a600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000276000 TLS:<nil>}
I1122 00:03:59.151108  260668 retry.go:31] will retry after 265.312403ms: Temporary Error: unexpected response code: 503
I1122 00:03:59.420244  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[50263154-a6b3-44c7-adda-fbf7facb568c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:03:59 GMT]] Body:0xc0017316c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00037ea00 TLS:<nil>}
I1122 00:03:59.420317  260668 retry.go:31] will retry after 619.489841ms: Temporary Error: unexpected response code: 503
I1122 00:04:00.043716  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e97e8a16-d6fe-44be-8cc2-0ca942b64d52] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:04:00 GMT]] Body:0xc0008eba40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002097c0 TLS:<nil>}
I1122 00:04:00.043795  260668 retry.go:31] will retry after 881.495374ms: Temporary Error: unexpected response code: 503
I1122 00:04:00.929201  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[961b0786-cf07-4275-a0cb-3a3ea40a860e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:04:00 GMT]] Body:0xc0008ebb40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000276140 TLS:<nil>}
I1122 00:04:00.929289  260668 retry.go:31] will retry after 1.297272338s: Temporary Error: unexpected response code: 503
I1122 00:04:02.230527  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cb27b6d3-adb1-4be5-8dad-10a64fd43316] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:04:02 GMT]] Body:0xc0008ebc00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000276280 TLS:<nil>}
I1122 00:04:02.230593  260668 retry.go:31] will retry after 1.388731631s: Temporary Error: unexpected response code: 503
I1122 00:04:03.623665  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bbd20055-4a5a-46d1-81bd-61e3b0c6a65a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:04:03 GMT]] Body:0xc00154a6c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002763c0 TLS:<nil>}
I1122 00:04:03.623753  260668 retry.go:31] will retry after 3.537534906s: Temporary Error: unexpected response code: 503
I1122 00:04:07.165979  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[656687c4-ced0-4260-868d-7d568c7ff4ff] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:04:07 GMT]] Body:0xc001731800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00037eb40 TLS:<nil>}
I1122 00:04:07.166047  260668 retry.go:31] will retry after 4.482355713s: Temporary Error: unexpected response code: 503
I1122 00:04:11.655603  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c16b5a98-89ca-4f89-b7a6-61e49f09ad31] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:04:11 GMT]] Body:0xc0008ebd40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00037ec80 TLS:<nil>}
I1122 00:04:11.655690  260668 retry.go:31] will retry after 4.183656355s: Temporary Error: unexpected response code: 503
I1122 00:04:15.847566  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5060570b-825b-4f94-ab41-cb9d13fff36e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:04:15 GMT]] Body:0xc0017318c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000276640 TLS:<nil>}
I1122 00:04:15.847640  260668 retry.go:31] will retry after 7.355797312s: Temporary Error: unexpected response code: 503
I1122 00:04:23.207616  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[698970c0-1367-46e1-8b04-38e35bab6956] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:04:23 GMT]] Body:0xc00154a800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209a40 TLS:<nil>}
I1122 00:04:23.207702  260668 retry.go:31] will retry after 9.716249792s: Temporary Error: unexpected response code: 503
I1122 00:04:32.928787  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2911d801-bdb4-4afb-8511-c0b3c3f03043] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:04:32 GMT]] Body:0xc00154a900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00037edc0 TLS:<nil>}
I1122 00:04:32.928867  260668 retry.go:31] will retry after 11.301564708s: Temporary Error: unexpected response code: 503
I1122 00:04:44.236146  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[73078c3f-4e29-44fa-8e1e-3a646dd3d4b0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:04:44 GMT]] Body:0xc00154a9c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000276780 TLS:<nil>}
I1122 00:04:44.236231  260668 retry.go:31] will retry after 40.342331471s: Temporary Error: unexpected response code: 503
I1122 00:05:24.581984  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c1d4f1cc-6bc6-439b-b8a2-38239f9ada29] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:05:24 GMT]] Body:0xc0008ebe40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00037ef00 TLS:<nil>}
I1122 00:05:24.582063  260668 retry.go:31] will retry after 30.859837652s: Temporary Error: unexpected response code: 503
I1122 00:05:55.447008  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3f82941f-c1bc-4ee4-bca2-7ed5ef7fe963] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:05:55 GMT]] Body:0xc001644900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00037f040 TLS:<nil>}
I1122 00:05:55.447101  260668 retry.go:31] will retry after 35.86117041s: Temporary Error: unexpected response code: 503
I1122 00:06:31.314561  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[75a99bb2-3279-447b-95e4-a01412e41e3d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:06:31 GMT]] Body:0xc001730080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000138000 TLS:<nil>}
I1122 00:06:31.314638  260668 retry.go:31] will retry after 1m29.723558292s: Temporary Error: unexpected response code: 503
I1122 00:08:01.042771  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[66031fb8-bd97-45cc-858e-21b708b9666e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:08:01 GMT]] Body:0xc001644040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000138140 TLS:<nil>}
I1122 00:08:01.042872  260668 retry.go:31] will retry after 53.408212235s: Temporary Error: unexpected response code: 503
I1122 00:08:54.455089  260668 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[70063408-1d2e-4816-af3f-a6ee93e4f513] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 22 Nov 2025 00:08:54 GMT]] Body:0xc0008ea0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000138640 TLS:<nil>}
I1122 00:08:54.455213  260668 retry.go:31] will retry after 1m27.962600304s: Temporary Error: unexpected response code: 503
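The repeated 503s above come from the dashboard health check, which keeps polling the dashboard service through the local kubectl proxy and backs off between attempts until the surrounding test times out. A rough sketch of that poll-with-backoff pattern, reusing the proxy URL and port from this run (the doubling schedule below is illustrative; the actual retry intervals are chosen by minikube's retry package):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeDashboard polls the kubectl-proxy URL until it returns 200 OK or the
// deadline passes, roughly doubling the wait between attempts.
func probeDashboard(url string, deadline time.Duration) error {
	delay := 100 * time.Microsecond
	start := time.Now()
	for time.Since(start) < deadline {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("unexpected response code: %d, retrying in %v\n", resp.StatusCode, delay)
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("dashboard did not become healthy within %v", deadline)
}

func main() {
	// Same proxy endpoint the test exercises on port 36195.
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := probeDashboard(url, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}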
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-783762 -n functional-783762
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 logs -n 25
E1122 00:08:58.473053  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-783762 logs -n 25: (1.52807038s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-783762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1089069711/001:/mount3 --alsologtostderr -v=1 │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ ssh            │ functional-783762 ssh findmnt -T /mount1                                                                           │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │ 22 Nov 25 00:03 UTC │
	│ ssh            │ functional-783762 ssh findmnt -T /mount2                                                                           │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │ 22 Nov 25 00:03 UTC │
	│ ssh            │ functional-783762 ssh findmnt -T /mount3                                                                           │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │ 22 Nov 25 00:03 UTC │
	│ mount          │ -p functional-783762 --kill=true                                                                                   │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ start          │ -p functional-783762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio            │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ start          │ -p functional-783762 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                      │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ start          │ -p functional-783762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio            │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-783762 --alsologtostderr -v=1                                                     │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ ssh            │ functional-783762 ssh sudo cat /etc/ssl/certs/250664.pem                                                           │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ ssh            │ functional-783762 ssh sudo cat /usr/share/ca-certificates/250664.pem                                               │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ ssh            │ functional-783762 ssh sudo cat /etc/ssl/certs/51391683.0                                                           │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ ssh            │ functional-783762 ssh sudo cat /etc/ssl/certs/2506642.pem                                                          │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ ssh            │ functional-783762 ssh sudo cat /usr/share/ca-certificates/2506642.pem                                              │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ ssh            │ functional-783762 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                           │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ image          │ functional-783762 image ls --format short --alsologtostderr                                                        │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ image          │ functional-783762 image ls --format yaml --alsologtostderr                                                         │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ ssh            │ functional-783762 ssh pgrep buildkitd                                                                              │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │                     │
	│ image          │ functional-783762 image build -t localhost/my-image:functional-783762 testdata/build --alsologtostderr             │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ image          │ functional-783762 image ls                                                                                         │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ image          │ functional-783762 image ls --format json --alsologtostderr                                                         │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ image          │ functional-783762 image ls --format table --alsologtostderr                                                        │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ update-context │ functional-783762 update-context --alsologtostderr -v=2                                                            │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ update-context │ functional-783762 update-context --alsologtostderr -v=2                                                            │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ update-context │ functional-783762 update-context --alsologtostderr -v=2                                                            │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:03:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:03:56.869086  260652 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:03:56.869638  260652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:03:56.869656  260652 out.go:374] Setting ErrFile to fd 2...
	I1122 00:03:56.869662  260652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:03:56.870298  260652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 00:03:56.871056  260652 out.go:368] Setting JSON to false
	I1122 00:03:56.872195  260652 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27965,"bootTime":1763741872,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:03:56.872344  260652 start.go:143] virtualization: kvm guest
	I1122 00:03:56.874227  260652 out.go:179] * [functional-783762] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:03:56.875684  260652 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:03:56.875758  260652 notify.go:221] Checking for updates...
	I1122 00:03:56.878576  260652 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:03:56.880113  260652 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1122 00:03:56.884960  260652 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	I1122 00:03:56.886574  260652 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:03:56.887970  260652 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:03:56.889736  260652 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:03:56.890303  260652 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:03:56.921947  260652 out.go:179] * Using the kvm2 driver based on the existing profile
	I1122 00:03:56.923317  260652 start.go:309] selected driver: kvm2
	I1122 00:03:56.923337  260652 start.go:930] validating driver "kvm2" against &{Name:functional-783762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-783762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:03:56.923483  260652 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:03:56.925829  260652 out.go:203] 
	W1122 00:03:56.927360  260652 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
	I1122 00:03:56.928728  260652 out.go:203] 
	
	
	==> CRI-O <==
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.753356256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8166529-dc8e-4548-846a-afcbf5f8db0b name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.753436686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8166529-dc8e-4548-846a-afcbf5f8db0b name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.753755664Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95d552f310077dbb71aa5b22ef7be71d04c3592f8270330f3dc6f8195d61e047,PodSandboxId:2c249bc4e15ab7b56cf51f9078096523a863e77b8bd9e308dd5311716682c55d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763769830827400746,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47d8e8ac-6e19-45c9-a495-6a6b6848a8e4,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26451e4d2e6ccdca80e0db009442a15061a408c0cba565df9e5041bf158049c3,PodSandboxId:93fa5fdbb6fc28f05008e5c01c96f3db881614c1406ddf48130fa76b31fc64be,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763769732046845788,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-cjd8h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4abfb066-72db-4ce1-8963-25d7bee932a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d861ebb3bc7a4b2eef3ae0f5920853815e45e2f5a7c904c15da9579bf1ed8354,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763769707973235782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name
\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937b2a1e9b101160e7748ede304ccb8d1525f30ab7403c6da8c9e069f1e199d0,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763769706921898308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},A
nnotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5168f65cd008045da49893b96bed7e32efb8346d66a7d9d40934ae8e2e9d0e,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763769706766851201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisi
oner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa18a4bc25e43ee3b48de6de00a157e952e7608951c944d746541a037c7d494,PodSandboxId:a8234273d863ee2cc98ae5cd8f82dc2da86843090e62e58decc21cdf13c668b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763769706814494400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622d332041e786701191f78a09bf73dd9406c9297fe6831bf955fa085b914c9d,PodSandboxId:df91e6302180a62a7b6581045d8002297595b188ee7a94262ba4105754685773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e5
61a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763769702074990416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7059d8a3af78a4a0c906c515245dc6d4414fc28d580cab2fddc21a5a3e5385,PodSandboxId:0f7dd710594cad5757444e9682b0fa16f3e34d7039a34fd992c885d80b8d25
8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763769701977969247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c435c405010db72805d64d746d7f
7105f7f12029df399df5284c3e900e2773,PodSandboxId:5bb0fd666e6bd9751e9fdfd576d9503c08d73449309add09c5734a2386ef52a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763769701785735881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c5ba38a97236a5f6237635c69b5123297b3dad277fc5793f3173862464829
d,PodSandboxId:8ba98d2cdc1e383d4c406e30f212b0ab0d901dada207b961499db4ed4d1e26df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763769701716886607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}
,&Container{Id:e3cfcf54044a53d7dd88d7db26f1e416d71caaab702bc46737a7be38cccdb604,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763769701689668484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:c0a6e5bbcecef81643a8b1f25289fb2a355f42d76ed00755be12d759e4376728,PodSandboxId:e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763769664906347403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cf4193002456e3aa12568d11c5337a
f3f25e972ef32773b772906e32177b19,PodSandboxId:6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763769660274653220,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35efea65afed5a14e33a117e8d2e86725efb1398a611e9c9ad4b66ff388490c,PodSandboxId:39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763769660277144916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5219aacb2ecc8f9ab6b7a4db8a50a890fedaa0848f7b8ec907cbcacfc3c6344,PodSandboxId:e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763769660245120828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2853482cd778dbcfedab3ca5c70c06d4b54b2ac79c80e85fa4d6427fb51d475f,PodSandboxId:a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763769643913271123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kuber
netes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8166529-dc8e-4548-846a-afcbf5f8db0b name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.801363990Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1ec5858-d55b-4bf9-a195-3e91bb70465c name=/runtime.v1.RuntimeService/Version
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.801459843Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1ec5858-d55b-4bf9-a195-3e91bb70465c name=/runtime.v1.RuntimeService/Version
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.803138593Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82627935-5b71-47ab-9039-8ee91c0a3136 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.803839341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763770137803814536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203237,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82627935-5b71-47ab-9039-8ee91c0a3136 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.804836549Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5230790-16ab-4e8d-a417-c3033c673faa name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.804909772Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5230790-16ab-4e8d-a417-c3033c673faa name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.805310165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95d552f310077dbb71aa5b22ef7be71d04c3592f8270330f3dc6f8195d61e047,PodSandboxId:2c249bc4e15ab7b56cf51f9078096523a863e77b8bd9e308dd5311716682c55d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763769830827400746,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47d8e8ac-6e19-45c9-a495-6a6b6848a8e4,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26451e4d2e6ccdca80e0db009442a15061a408c0cba565df9e5041bf158049c3,PodSandboxId:93fa5fdbb6fc28f05008e5c01c96f3db881614c1406ddf48130fa76b31fc64be,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763769732046845788,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-cjd8h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4abfb066-72db-4ce1-8963-25d7bee932a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d861ebb3bc7a4b2eef3ae0f5920853815e45e2f5a7c904c15da9579bf1ed8354,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763769707973235782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name
\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937b2a1e9b101160e7748ede304ccb8d1525f30ab7403c6da8c9e069f1e199d0,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763769706921898308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},A
nnotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5168f65cd008045da49893b96bed7e32efb8346d66a7d9d40934ae8e2e9d0e,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763769706766851201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisi
oner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa18a4bc25e43ee3b48de6de00a157e952e7608951c944d746541a037c7d494,PodSandboxId:a8234273d863ee2cc98ae5cd8f82dc2da86843090e62e58decc21cdf13c668b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763769706814494400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622d332041e786701191f78a09bf73dd9406c9297fe6831bf955fa085b914c9d,PodSandboxId:df91e6302180a62a7b6581045d8002297595b188ee7a94262ba4105754685773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e5
61a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763769702074990416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7059d8a3af78a4a0c906c515245dc6d4414fc28d580cab2fddc21a5a3e5385,PodSandboxId:0f7dd710594cad5757444e9682b0fa16f3e34d7039a34fd992c885d80b8d25
8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763769701977969247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c435c405010db72805d64d746d7f
7105f7f12029df399df5284c3e900e2773,PodSandboxId:5bb0fd666e6bd9751e9fdfd576d9503c08d73449309add09c5734a2386ef52a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763769701785735881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c5ba38a97236a5f6237635c69b5123297b3dad277fc5793f3173862464829
d,PodSandboxId:8ba98d2cdc1e383d4c406e30f212b0ab0d901dada207b961499db4ed4d1e26df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763769701716886607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}
,&Container{Id:e3cfcf54044a53d7dd88d7db26f1e416d71caaab702bc46737a7be38cccdb604,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763769701689668484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:c0a6e5bbcecef81643a8b1f25289fb2a355f42d76ed00755be12d759e4376728,PodSandboxId:e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763769664906347403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cf4193002456e3aa12568d11c5337a
f3f25e972ef32773b772906e32177b19,PodSandboxId:6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763769660274653220,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35efea65afed5a14e33a117e8d2e86725efb1398a611e9c9ad4b66ff388490c,PodSandboxId:39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763769660277144916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5219aacb2ecc8f9ab6b7a4db8a50a890fedaa0848f7b8ec907cbcacfc3c6344,PodSandboxId:e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763769660245120828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2853482cd778dbcfedab3ca5c70c06d4b54b2ac79c80e85fa4d6427fb51d475f,PodSandboxId:a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763769643913271123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kuber
netes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5230790-16ab-4e8d-a417-c3033c673faa name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.841890398Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a69cd5c9-4d9e-4a94-91b3-343008e46b28 name=/runtime.v1.RuntimeService/Version
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.842423915Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a69cd5c9-4d9e-4a94-91b3-343008e46b28 name=/runtime.v1.RuntimeService/Version
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.843977989Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87334320-758f-4134-abbb-7949d8bafaba name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.845084419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763770137845003102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203237,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87334320-758f-4134-abbb-7949d8bafaba name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.846505271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8049a58d-71a5-410f-8777-0be14f65c974 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.846842622Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8049a58d-71a5-410f-8777-0be14f65c974 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.847294993Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95d552f310077dbb71aa5b22ef7be71d04c3592f8270330f3dc6f8195d61e047,PodSandboxId:2c249bc4e15ab7b56cf51f9078096523a863e77b8bd9e308dd5311716682c55d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763769830827400746,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47d8e8ac-6e19-45c9-a495-6a6b6848a8e4,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26451e4d2e6ccdca80e0db009442a15061a408c0cba565df9e5041bf158049c3,PodSandboxId:93fa5fdbb6fc28f05008e5c01c96f3db881614c1406ddf48130fa76b31fc64be,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763769732046845788,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-cjd8h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4abfb066-72db-4ce1-8963-25d7bee932a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d861ebb3bc7a4b2eef3ae0f5920853815e45e2f5a7c904c15da9579bf1ed8354,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763769707973235782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name
\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937b2a1e9b101160e7748ede304ccb8d1525f30ab7403c6da8c9e069f1e199d0,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763769706921898308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},A
nnotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5168f65cd008045da49893b96bed7e32efb8346d66a7d9d40934ae8e2e9d0e,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763769706766851201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisi
oner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa18a4bc25e43ee3b48de6de00a157e952e7608951c944d746541a037c7d494,PodSandboxId:a8234273d863ee2cc98ae5cd8f82dc2da86843090e62e58decc21cdf13c668b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763769706814494400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622d332041e786701191f78a09bf73dd9406c9297fe6831bf955fa085b914c9d,PodSandboxId:df91e6302180a62a7b6581045d8002297595b188ee7a94262ba4105754685773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e5
61a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763769702074990416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7059d8a3af78a4a0c906c515245dc6d4414fc28d580cab2fddc21a5a3e5385,PodSandboxId:0f7dd710594cad5757444e9682b0fa16f3e34d7039a34fd992c885d80b8d25
8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763769701977969247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c435c405010db72805d64d746d7f
7105f7f12029df399df5284c3e900e2773,PodSandboxId:5bb0fd666e6bd9751e9fdfd576d9503c08d73449309add09c5734a2386ef52a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763769701785735881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c5ba38a97236a5f6237635c69b5123297b3dad277fc5793f3173862464829
d,PodSandboxId:8ba98d2cdc1e383d4c406e30f212b0ab0d901dada207b961499db4ed4d1e26df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763769701716886607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}
,&Container{Id:e3cfcf54044a53d7dd88d7db26f1e416d71caaab702bc46737a7be38cccdb604,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763769701689668484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:c0a6e5bbcecef81643a8b1f25289fb2a355f42d76ed00755be12d759e4376728,PodSandboxId:e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763769664906347403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cf4193002456e3aa12568d11c5337a
f3f25e972ef32773b772906e32177b19,PodSandboxId:6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763769660274653220,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35efea65afed5a14e33a117e8d2e86725efb1398a611e9c9ad4b66ff388490c,PodSandboxId:39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763769660277144916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5219aacb2ecc8f9ab6b7a4db8a50a890fedaa0848f7b8ec907cbcacfc3c6344,PodSandboxId:e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763769660245120828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2853482cd778dbcfedab3ca5c70c06d4b54b2ac79c80e85fa4d6427fb51d475f,PodSandboxId:a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763769643913271123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kuber
netes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8049a58d-71a5-410f-8777-0be14f65c974 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.881535801Z" level=debug msg="GET https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" file="docker/docker_client.go:631"
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.899893124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54074ab3-0eee-49db-ac17-5fa667e264c0 name=/runtime.v1.RuntimeService/Version
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.899990952Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54074ab3-0eee-49db-ac17-5fa667e264c0 name=/runtime.v1.RuntimeService/Version
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.901706861Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95890802-aefb-4c68-83c7-5217e3607243 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:08:57 functional-783762 crio[5816]: time="2025-11-22 00:08:57.902710656Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763770137902680700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203237,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95890802-aefb-4c68-83c7-5217e3607243 name=/runtime.v1.ImageService/ImageFsInfo
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	95d552f310077       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e     5 minutes ago       Exited              mount-munger              0                   2c249bc4e15ab       busybox-mount                               default
	26451e4d2e6cc       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   6 minutes ago       Running             echo-server               0                   93fa5fdbb6fc2       hello-node-connect-7d85dfc575-cjd8h         default
	d861ebb3bc7a4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                        7 minutes ago       Running             kube-apiserver            1                   189d6f9812ba4       kube-apiserver-functional-783762            kube-system
	937b2a1e9b101       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                        7 minutes ago       Exited              kube-apiserver            0                   189d6f9812ba4       kube-apiserver-functional-783762            kube-system
	3fa18a4bc25e4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        7 minutes ago       Running             coredns                   2                   a8234273d863e       coredns-66bc5c9577-4hlw7                    kube-system
	bb5168f65cd00       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        7 minutes ago       Running             storage-provisioner       5                   5ba6b9ce4a22a       storage-provisioner                         kube-system
	622d332041e78       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        7 minutes ago       Running             kube-controller-manager   3                   df91e6302180a       kube-controller-manager-functional-783762   kube-system
	ae7059d8a3af7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        7 minutes ago       Running             kube-scheduler            3                   0f7dd710594ca       kube-scheduler-functional-783762            kube-system
	c0c435c405010       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                        7 minutes ago       Running             kube-proxy                3                   5bb0fd666e6bd       kube-proxy-6cqt7                            kube-system
	67c5ba38a9723       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        7 minutes ago       Running             etcd                      3                   8ba98d2cdc1e3       etcd-functional-783762                      kube-system
	e3cfcf54044a5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        7 minutes ago       Exited              storage-provisioner       4                   5ba6b9ce4a22a       storage-provisioner                         kube-system
	c0a6e5bbcecef       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                        7 minutes ago       Exited              kube-proxy                2                   e4ec1069bc019       kube-proxy-6cqt7                            kube-system
	f35efea65afed       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        7 minutes ago       Exited              kube-scheduler            2                   39e5a20696df9       kube-scheduler-functional-783762            kube-system
	04cf419300245       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        7 minutes ago       Exited              kube-controller-manager   2                   6ff5789062c7b       kube-controller-manager-functional-783762   kube-system
	b5219aacb2ecc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        7 minutes ago       Exited              etcd                      2                   e9301a0a5805e       etcd-functional-783762                      kube-system
	2853482cd778d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        8 minutes ago       Exited              coredns                   1                   a513bd37a8530       coredns-66bc5c9577-4hlw7                    kube-system
	
	
	==> coredns [2853482cd778dbcfedab3ca5c70c06d4b54b2ac79c80e85fa4d6427fb51d475f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56778 - 24880 "HINFO IN 510444860572811029.660604583740510837. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.498605514s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3fa18a4bc25e43ee3b48de6de00a157e952e7608951c944d746541a037c7d494] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] 127.0.0.1:46548 - 19382 "HINFO IN 3244271735892804347.7610011136115581116. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.855078147s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               functional-783762
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-783762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=functional-783762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_00_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:00:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-783762
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:08:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:08:55 +0000   Sat, 22 Nov 2025 00:00:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:08:55 +0000   Sat, 22 Nov 2025 00:00:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:08:55 +0000   Sat, 22 Nov 2025 00:00:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:08:55 +0000   Sat, 22 Nov 2025 00:00:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    functional-783762
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 d561c82d24c84e51aed0106657c2085c
	  System UUID:                d561c82d-24c8-4e51-aed0-106657c2085c
	  Boot ID:                    b0cf542f-e0b4-488f-b149-70c03b493ebb
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-2dc5f                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  default                     hello-node-connect-7d85dfc575-cjd8h           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  default                     mysql-5bb876957f-qhcz2                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    6m47s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 coredns-66bc5c9577-4hlw7                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m42s
	  kube-system                 etcd-functional-783762                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m49s
	  kube-system                 kube-apiserver-functional-783762              250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-controller-manager-functional-783762     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 kube-proxy-6cqt7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 kube-scheduler-functional-783762              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-xc2xc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-284lt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m41s                  kube-proxy       
	  Normal  Starting                 7m1s                   kube-proxy       
	  Normal  Starting                 7m52s                  kube-proxy       
	  Normal  Starting                 8m11s                  kube-proxy       
	  Normal  Starting                 8m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m54s (x8 over 8m55s)  kubelet          Node functional-783762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m54s (x8 over 8m55s)  kubelet          Node functional-783762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m54s (x7 over 8m55s)  kubelet          Node functional-783762 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m47s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m47s                  kubelet          Node functional-783762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m47s                  kubelet          Node functional-783762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m47s                  kubelet          Node functional-783762 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m46s                  kubelet          Node functional-783762 status is now: NodeReady
	  Normal  RegisteredNode           8m43s                  node-controller  Node functional-783762 event: Registered Node functional-783762 in Controller
	  Normal  CIDRAssignmentFailed     8m43s                  cidrAllocator    Node functional-783762 status is now: CIDRAssignmentFailed
	  Normal  NodeAllocatableEnforced  7m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m59s (x8 over 7m59s)  kubelet          Node functional-783762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m59s (x8 over 7m59s)  kubelet          Node functional-783762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m59s (x7 over 7m59s)  kubelet          Node functional-783762 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m59s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m51s                  node-controller  Node functional-783762 event: Registered Node functional-783762 in Controller
	  Normal  Starting                 7m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m13s (x3 over 7m13s)  kubelet          Node functional-783762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m13s (x3 over 7m13s)  kubelet          Node functional-783762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m13s (x3 over 7m13s)  kubelet          Node functional-783762 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m5s                   node-controller  Node functional-783762 event: Registered Node functional-783762 in Controller
	
	
	==> dmesg <==
	[  +0.000013] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.184426] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083639] kauditd_printk_skb: 1 callbacks suppressed
	[Nov22 00:00] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.145077] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.340851] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.297709] kauditd_printk_skb: 252 callbacks suppressed
	[  +0.113526] kauditd_printk_skb: 44 callbacks suppressed
	[  +4.759596] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.379699] kauditd_printk_skb: 284 callbacks suppressed
	[Nov22 00:01] kauditd_printk_skb: 98 callbacks suppressed
	[  +7.640718] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.360006] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.113329] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.570529] kauditd_printk_skb: 78 callbacks suppressed
	[  +1.004551] kauditd_printk_skb: 277 callbacks suppressed
	[  +4.444666] kauditd_printk_skb: 70 callbacks suppressed
	[Nov22 00:02] kauditd_printk_skb: 133 callbacks suppressed
	[  +6.671222] kauditd_printk_skb: 53 callbacks suppressed
	[ +19.842391] kauditd_printk_skb: 110 callbacks suppressed
	[Nov22 00:03] kauditd_printk_skb: 25 callbacks suppressed
	[Nov22 00:05] kauditd_printk_skb: 74 callbacks suppressed
	[Nov22 00:08] crun[10000]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [67c5ba38a97236a5f6237635c69b5123297b3dad277fc5793f3173862464829d] <==
	{"level":"warn","ts":"2025-11-22T00:01:49.532581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.550609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.553213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.561695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.570903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.578124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.585459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.598496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.610962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.614359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.625644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.637505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.645274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.657386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.668836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.678186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.688728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.698178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.707480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.718525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.733117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.748915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.768596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.775168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.821712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42830","server-name":"","error":"EOF"}
	
	
	==> etcd [b5219aacb2ecc8f9ab6b7a4db8a50a890fedaa0848f7b8ec907cbcacfc3c6344] <==
	{"level":"warn","ts":"2025-11-22T00:01:02.790222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:02.806118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:02.847308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:02.872404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:02.902639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:02.922989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:03.026408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51086","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:01:27.413794Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-22T00:01:27.413883Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-783762","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.76:2380"],"advertise-client-urls":["https://192.168.39.76:2379"]}
	{"level":"error","ts":"2025-11-22T00:01:27.420370Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-22T00:01:27.493936Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"info","ts":"2025-11-22T00:01:27.494086Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4f06aa0eaa8889d9","current-leader-member-id":"4f06aa0eaa8889d9"}
	{"level":"info","ts":"2025-11-22T00:01:27.494207Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-22T00:01:27.494218Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-11-22T00:01:27.493995Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-22T00:01:27.494425Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-22T00:01:27.494488Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-22T00:01:27.494495Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-22T00:01:27.494528Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.76:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-22T00:01:27.494535Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.76:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-22T00:01:27.494540Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.76:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-22T00:01:27.497717Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.76:2380"}
	{"level":"error","ts":"2025-11-22T00:01:27.497803Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.76:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-22T00:01:27.497831Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.76:2380"}
	{"level":"info","ts":"2025-11-22T00:01:27.497836Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-783762","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.76:2380"],"advertise-client-urls":["https://192.168.39.76:2379"]}
	
	
	==> kernel <==
	 00:08:58 up 9 min,  0 users,  load average: 0.58, 0.46, 0.31
	Linux functional-783762 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [937b2a1e9b101160e7748ede304ccb8d1525f30ab7403c6da8c9e069f1e199d0] <==
	I1122 00:01:47.323261       1 options.go:263] external host was not specified, using 192.168.39.76
	I1122 00:01:47.342461       1 server.go:150] Version: v1.34.1
	I1122 00:01:47.344089       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1122 00:01:47.350369       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-apiserver [d861ebb3bc7a4b2eef3ae0f5920853815e45e2f5a7c904c15da9579bf1ed8354] <==
	I1122 00:01:50.581637       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:01:50.584164       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1122 00:01:50.584812       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:01:50.613117       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:01:50.617194       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:01:50.617253       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:01:50.633429       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:01:50.633455       1 policy_source.go:240] refreshing policies
	I1122 00:01:50.668250       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:01:51.389775       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:01:52.299244       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:01:52.312669       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:01:52.363823       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:01:52.396406       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:01:52.411780       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:01:54.192121       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:01:54.242406       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:01:54.289696       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:02:05.981512       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.230.73"}
	I1122 00:02:10.365289       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.229.206"}
	I1122 00:02:11.008541       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.112.146"}
	I1122 00:02:18.200764       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.69.213"}
	I1122 00:03:57.914693       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:03:58.256928       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.197.188"}
	I1122 00:03:58.288243       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.112.221"}
	
	
	==> kube-controller-manager [04cf4193002456e3aa12568d11c5337af3f25e972ef32773b772906e32177b19] <==
	I1122 00:01:07.169996       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:01:07.170099       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:01:07.171872       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:01:07.172227       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:01:07.173981       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:01:07.174640       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:01:07.180084       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:01:07.180466       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1122 00:01:07.184965       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:01:07.184982       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:01:07.185568       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:01:07.188346       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:01:07.191724       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:01:07.191803       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:01:07.191883       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-783762"
	I1122 00:01:07.191943       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:01:07.200160       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:01:07.206539       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:01:07.208792       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:01:07.212867       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:01:07.212941       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:01:07.212952       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:01:07.212957       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:01:07.215230       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:01:07.216946       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [622d332041e786701191f78a09bf73dd9406c9297fe6831bf955fa085b914c9d] <==
	I1122 00:01:53.915146       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:01:53.926745       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:01:53.927772       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:01:53.930209       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1122 00:01:53.935999       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:01:53.936102       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1122 00:01:53.936128       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:01:53.936196       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1122 00:01:53.937461       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:01:53.937508       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:01:53.937509       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:01:53.938858       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:01:53.940109       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:01:53.940123       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:01:53.946617       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:01:53.946660       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:01:53.946667       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1122 00:03:58.049870       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.062735       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.063775       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.079287       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.079327       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.090808       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.091966       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.101149       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [c0a6e5bbcecef81643a8b1f25289fb2a355f42d76ed00755be12d759e4376728] <==
	I1122 00:01:05.199636       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:01:05.299976       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:01:05.300123       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.76"]
	E1122 00:01:05.300255       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:01:05.355005       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1122 00:01:05.355111       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1122 00:01:05.355133       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:01:05.368603       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:01:05.369525       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:01:05.370098       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:01:05.378679       1 config.go:200] "Starting service config controller"
	I1122 00:01:05.379181       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:01:05.379281       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:01:05.379359       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:01:05.379373       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:01:05.379377       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:01:05.381930       1 config.go:309] "Starting node config controller"
	I1122 00:01:05.382118       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:01:05.382145       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:01:05.479858       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:01:05.479996       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:01:05.479915       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c0c435c405010db72805d64d746d7f7105f7f12029df399df5284c3e900e2773] <==
	E1122 00:01:50.503748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-783762\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1122 00:01:56.849670       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:01:56.849736       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.76"]
	E1122 00:01:56.849823       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:01:56.891422       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1122 00:01:56.891506       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1122 00:01:56.891532       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:01:56.902453       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:01:56.902815       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:01:56.902851       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:01:56.907904       1 config.go:200] "Starting service config controller"
	I1122 00:01:56.908112       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:01:56.908180       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:01:56.908187       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:01:56.908199       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:01:56.908204       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:01:56.912411       1 config.go:309] "Starting node config controller"
	I1122 00:01:56.912542       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:01:56.912551       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:01:57.008980       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:01:57.009061       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:01:57.009135       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ae7059d8a3af78a4a0c906c515245dc6d4414fc28d580cab2fddc21a5a3e5385] <==
	I1122 00:01:46.029707       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:01:46.029751       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:01:46.032102       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:46.032145       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:46.032850       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:01:46.032926       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:01:46.132551       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1122 00:01:50.422801       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:01:50.422864       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:01:50.422881       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:01:50.422897       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:01:50.422911       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:01:50.422922       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:01:50.422930       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:01:50.422942       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:01:50.422956       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:01:50.423168       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:01:50.423205       1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:01:50.423216       1 reflector.go:205] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:01:50.423229       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:01:50.423244       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:01:50.433368       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:01:50.433490       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:01:50.433535       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:01:50.433558       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	
	
	==> kube-scheduler [f35efea65afed5a14e33a117e8d2e86725efb1398a611e9c9ad4b66ff388490c] <==
	I1122 00:01:01.732896       1 serving.go:386] Generated self-signed cert in-memory
	W1122 00:01:03.754650       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 00:01:03.754673       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:01:03.754681       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 00:01:03.754686       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 00:01:03.832520       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:01:03.834121       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:01:03.840611       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:01:03.840695       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:03.841229       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:03.840710       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:01:03.949210       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:27.426479       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1122 00:01:27.426592       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1122 00:01:27.426611       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1122 00:01:27.426632       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:27.426820       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1122 00:01:27.426855       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 22 00:08:05 functional-783762 kubelet[6792]: E1122 00:08:05.991453    6792 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763770085990707480  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177577}  inodes_used:{value:89}}"
	Nov 22 00:08:06 functional-783762 kubelet[6792]: E1122 00:08:06.545789    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qhcz2" podUID="bad1e5d7-de2e-4b08-b406-2e87deac3a9c"
	Nov 22 00:08:10 functional-783762 kubelet[6792]: E1122 00:08:10.542439    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-2dc5f" podUID="db6d5163-56fa-41b6-9f55-3ebb021aa3c3"
	Nov 22 00:08:15 functional-783762 kubelet[6792]: E1122 00:08:15.994093    6792 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763770095993530959  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177577}  inodes_used:{value:89}}"
	Nov 22 00:08:15 functional-783762 kubelet[6792]: E1122 00:08:15.994124    6792 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763770095993530959  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177577}  inodes_used:{value:89}}"
	Nov 22 00:08:22 functional-783762 kubelet[6792]: E1122 00:08:22.544717    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-2dc5f" podUID="db6d5163-56fa-41b6-9f55-3ebb021aa3c3"
	Nov 22 00:08:25 functional-783762 kubelet[6792]: E1122 00:08:25.996768    6792 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763770105996451628  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177577}  inodes_used:{value:89}}"
	Nov 22 00:08:25 functional-783762 kubelet[6792]: E1122 00:08:25.996812    6792 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763770105996451628  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177577}  inodes_used:{value:89}}"
	Nov 22 00:08:27 functional-783762 kubelet[6792]: E1122 00:08:27.794527    6792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 22 00:08:27 functional-783762 kubelet[6792]: E1122 00:08:27.794609    6792 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Nov 22 00:08:27 functional-783762 kubelet[6792]: E1122 00:08:27.794995    6792 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-xc2xc_kubernetes-dashboard(8328e596-dc9b-4609-b515-a53fe2575073): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 22 00:08:27 functional-783762 kubelet[6792]: E1122 00:08:27.795230    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-xc2xc" podUID="8328e596-dc9b-4609-b515-a53fe2575073"
	Nov 22 00:08:35 functional-783762 kubelet[6792]: E1122 00:08:35.998856    6792 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763770115998350397  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203237}  inodes_used:{value:105}}"
	Nov 22 00:08:35 functional-783762 kubelet[6792]: E1122 00:08:35.998927    6792 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763770115998350397  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203237}  inodes_used:{value:105}}"
	Nov 22 00:08:37 functional-783762 kubelet[6792]: E1122 00:08:37.543684    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-2dc5f" podUID="db6d5163-56fa-41b6-9f55-3ebb021aa3c3"
	Nov 22 00:08:42 functional-783762 kubelet[6792]: E1122 00:08:42.545956    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-xc2xc" podUID="8328e596-dc9b-4609-b515-a53fe2575073"
	Nov 22 00:08:45 functional-783762 kubelet[6792]: E1122 00:08:45.845644    6792 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod5017c5ab7e5f7612c69e258534ddf5e4/crio-e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d: Error finding container e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d: Status 404 returned error can't find the container with id e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d
	Nov 22 00:08:45 functional-783762 kubelet[6792]: E1122 00:08:45.845952    6792 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod9b393547de92689fa32a14bea69079b0/crio-39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf: Error finding container 39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf: Status 404 returned error can't find the container with id 39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf
	Nov 22 00:08:45 functional-783762 kubelet[6792]: E1122 00:08:45.846866    6792 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod3381fa89-cdde-43bf-a38c-f140281f28af/crio-e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511: Error finding container e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511: Status 404 returned error can't find the container with id e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511
	Nov 22 00:08:45 functional-783762 kubelet[6792]: E1122 00:08:45.847331    6792 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod5cb72e66-14a4-4209-861c-be0707b73762/crio-a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972: Error finding container a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972: Status 404 returned error can't find the container with id a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972
	Nov 22 00:08:45 functional-783762 kubelet[6792]: E1122 00:08:45.847901    6792 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd6a6bb5d624dd9724d6436c47c57eec9/crio-6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48: Error finding container 6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48: Status 404 returned error can't find the container with id 6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48
	Nov 22 00:08:46 functional-783762 kubelet[6792]: E1122 00:08:46.002459    6792 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763770126000962133  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203237}  inodes_used:{value:105}}"
	Nov 22 00:08:46 functional-783762 kubelet[6792]: E1122 00:08:46.002484    6792 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763770126000962133  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203237}  inodes_used:{value:105}}"
	Nov 22 00:08:56 functional-783762 kubelet[6792]: E1122 00:08:56.005199    6792 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763770136004652081  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203237}  inodes_used:{value:105}}"
	Nov 22 00:08:56 functional-783762 kubelet[6792]: E1122 00:08:56.005220    6792 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763770136004652081  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203237}  inodes_used:{value:105}}"
	
	
	==> storage-provisioner [bb5168f65cd008045da49893b96bed7e32efb8346d66a7d9d40934ae8e2e9d0e] <==
	W1122 00:08:33.410301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:35.414438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:35.424764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:37.428831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:37.434535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:39.438431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:39.448429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:41.451795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:41.457586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:43.462831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:43.471807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:45.476157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:45.481915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:47.486096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:47.491949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:49.496112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:49.505185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:51.509813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:51.520350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:53.523593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:53.529305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:55.532931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:55.546650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:57.554509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:57.560148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e3cfcf54044a53d7dd88d7db26f1e416d71caaab702bc46737a7be38cccdb604] <==
	I1122 00:01:42.079950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:01:42.087124       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-783762 -n functional-783762
helpers_test.go:269: (dbg) Run:  kubectl --context functional-783762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-2dc5f mysql-5bb876957f-qhcz2 sp-pod dashboard-metrics-scraper-77bf4d6c4c-xc2xc kubernetes-dashboard-855c9754f9-284lt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-783762 describe pod busybox-mount hello-node-75c85bcc94-2dc5f mysql-5bb876957f-qhcz2 sp-pod dashboard-metrics-scraper-77bf4d6c4c-xc2xc kubernetes-dashboard-855c9754f9-284lt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-783762 describe pod busybox-mount hello-node-75c85bcc94-2dc5f mysql-5bb876957f-qhcz2 sp-pod dashboard-metrics-scraper-77bf4d6c4c-xc2xc kubernetes-dashboard-855c9754f9-284lt: exit status 1 (108.215124ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-783762/192.168.39.76
	Start Time:       Sat, 22 Nov 2025 00:02:22 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://95d552f310077dbb71aa5b22ef7be71d04c3592f8270330f3dc6f8195d61e047
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 22 Nov 2025 00:03:50 +0000
	      Finished:     Sat, 22 Nov 2025 00:03:50 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lhmwv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-lhmwv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m37s  default-scheduler  Successfully assigned default/busybox-mount to functional-783762
	  Normal  Pulling    6m37s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m9s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.335s (1m28.082s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m9s   kubelet            Created container: mount-munger
	  Normal  Started    5m9s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-2dc5f
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-783762/192.168.39.76
	Start Time:       Sat, 22 Nov 2025 00:02:18 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svcql (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-svcql:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m41s                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-2dc5f to functional-783762
	  Warning  Failed     62s (x3 over 5m47s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     62s (x3 over 5m47s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    22s (x5 over 5m47s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     22s (x5 over 5m47s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    9s (x4 over 6m41s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-qhcz2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-783762/192.168.39.76
	Start Time:       Sat, 22 Nov 2025 00:02:11 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dxdkj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dxdkj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m48s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-qhcz2 to functional-783762
	  Warning  Failed     4m39s (x2 over 6m17s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     92s (x3 over 6m17s)    kubelet            Error: ErrImagePull
	  Warning  Failed     92s                    kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    53s (x5 over 6m17s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     53s (x5 over 6m17s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    41s (x4 over 6m47s)    kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-783762/192.168.39.76
	Start Time:       Sat, 22 Nov 2025 00:02:20 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zv6fz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-zv6fz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m39s                 default-scheduler  Successfully assigned default/sp-pod to functional-783762
	  Warning  Failed     5m11s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m8s (x2 over 5m11s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m8s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    115s (x2 over 5m11s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     115s (x2 over 5m11s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    102s (x3 over 6m38s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-xc2xc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-284lt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-783762 describe pod busybox-mount hello-node-75c85bcc94-2dc5f mysql-5bb876957f-qhcz2 sp-pod dashboard-metrics-scraper-77bf4d6c4c-xc2xc kubernetes-dashboard-855c9754f9-284lt: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.35s)
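Note on the failure mode: the dashboard pods never become ready because every pull from docker.io is rejected with "toomanyrequests: You have reached your unauthenticated pull rate limit" (see the kubelet log and pod events above); the same limit is what blocks hello-node, mysql and sp-pod. A minimal workaround sketch, outside the recorded run and assuming the CI host can still make a handful of pulls from Docker Hub, is to fetch the affected images once on the host and load them into the profile so CRI-O resolves them from local storage instead of re-pulling:

	# sketch only; image names are taken from the failing pods above, and whether the
	# digest-pinned metrics-scraper reference (v1.0.8@sha256:...) resolves against a
	# tag-loaded copy depends on the runtime, so treat this as illustrative
	docker pull docker.io/kicbase/echo-server:latest
	docker pull docker.io/kubernetesui/metrics-scraper:v1.0.8
	minikube -p functional-783762 image load docker.io/kicbase/echo-server:latest
	minikube -p functional-783762 image load docker.io/kubernetesui/metrics-scraper:v1.0.8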

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (370.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [00ac7e70-b6f1-4991-862a-db0ba26baf6c] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0037891s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-783762 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-783762 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-783762 get pvc myclaim -o=json
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-783762 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-783762 apply -f testdata/storage-provisioner/pod.yaml
I1122 00:02:20.547472  250664 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [afa559d2-e020-4167-ab00-1415f033cb2f] Pending
helpers_test.go:352: "sp-pod" [afa559d2-e020-4167-ab00-1415f033cb2f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-783762 -n functional-783762
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-11-22 00:08:20.806713763 +0000 UTC m=+1305.671005454
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-783762 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-783762 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-783762/192.168.39.76
Start Time:       Sat, 22 Nov 2025 00:02:20 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:  10.244.0.10
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zv6fz (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-zv6fz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/sp-pod to functional-783762
  Warning  Failed     4m32s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     89s (x2 over 4m32s)  kubelet            Error: ErrImagePull
  Warning  Failed     89s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    76s (x2 over 4m32s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     76s (x2 over 4m32s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    63s (x3 over 5m59s)  kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-783762 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-783762 logs sp-pod -n default: exit status 1 (78.831881ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-783762 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-783762 -n functional-783762
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-783762 logs -n 25: (1.520888978s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-783762 /tmp/TestFunctionalparallelMountCmdany-port4147511950/001:/mount-9p --alsologtostderr -v=1                   │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:02 UTC │                     │
	│ ssh       │ functional-783762 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:02 UTC │                     │
	│ ssh       │ functional-783762 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:02 UTC │ 22 Nov 25 00:02 UTC │
	│ ssh       │ functional-783762 ssh -- ls -la /mount-9p                                                                                         │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:02 UTC │ 22 Nov 25 00:02 UTC │
	│ ssh       │ functional-783762 ssh cat /mount-9p/test-1763769740735697586                                                                      │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:02 UTC │ 22 Nov 25 00:02 UTC │
	│ ssh       │ functional-783762 ssh stat /mount-9p/created-by-test                                                                              │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │ 22 Nov 25 00:03 UTC │
	│ ssh       │ functional-783762 ssh stat /mount-9p/created-by-pod                                                                               │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │ 22 Nov 25 00:03 UTC │
	│ ssh       │ functional-783762 ssh sudo umount -f /mount-9p                                                                                    │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │ 22 Nov 25 00:03 UTC │
	│ ssh       │ functional-783762 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ mount     │ -p functional-783762 /tmp/TestFunctionalparallelMountCmdspecific-port3029369325/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ ssh       │ functional-783762 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │ 22 Nov 25 00:03 UTC │
	│ ssh       │ functional-783762 ssh -- ls -la /mount-9p                                                                                         │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │ 22 Nov 25 00:03 UTC │
	│ ssh       │ functional-783762 ssh sudo umount -f /mount-9p                                                                                    │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ mount     │ -p functional-783762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1089069711/001:/mount2 --alsologtostderr -v=1                │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ ssh       │ functional-783762 ssh findmnt -T /mount1                                                                                          │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ mount     │ -p functional-783762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1089069711/001:/mount1 --alsologtostderr -v=1                │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ mount     │ -p functional-783762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1089069711/001:/mount3 --alsologtostderr -v=1                │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ ssh       │ functional-783762 ssh findmnt -T /mount1                                                                                          │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │ 22 Nov 25 00:03 UTC │
	│ ssh       │ functional-783762 ssh findmnt -T /mount2                                                                                          │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │ 22 Nov 25 00:03 UTC │
	│ ssh       │ functional-783762 ssh findmnt -T /mount3                                                                                          │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │ 22 Nov 25 00:03 UTC │
	│ mount     │ -p functional-783762 --kill=true                                                                                                  │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ start     │ -p functional-783762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio                           │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ start     │ -p functional-783762 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                     │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ start     │ -p functional-783762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio                           │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-783762 --alsologtostderr -v=1                                                                    │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:03:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:03:56.869086  260652 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:03:56.869638  260652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:03:56.869656  260652 out.go:374] Setting ErrFile to fd 2...
	I1122 00:03:56.869662  260652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:03:56.870298  260652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 00:03:56.871056  260652 out.go:368] Setting JSON to false
	I1122 00:03:56.872195  260652 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27965,"bootTime":1763741872,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:03:56.872344  260652 start.go:143] virtualization: kvm guest
	I1122 00:03:56.874227  260652 out.go:179] * [functional-783762] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:03:56.875684  260652 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:03:56.875758  260652 notify.go:221] Checking for updates...
	I1122 00:03:56.878576  260652 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:03:56.880113  260652 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1122 00:03:56.884960  260652 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	I1122 00:03:56.886574  260652 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:03:56.887970  260652 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:03:56.889736  260652 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:03:56.890303  260652 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:03:56.921947  260652 out.go:179] * Using the kvm2 driver based on existing profile
	I1122 00:03:56.923317  260652 start.go:309] selected driver: kvm2
	I1122 00:03:56.923337  260652 start.go:930] validating driver "kvm2" against &{Name:functional-783762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-783762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:03:56.923483  260652 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:03:56.925829  260652 out.go:203] 
	W1122 00:03:56.927360  260652 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	I1122 00:03:56.928728  260652 out.go:203] 
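This RSRC_INSUFFICIENT_REQ_MEMORY exit corresponds to the start --dry-run --memory 250MB invocations recorded in the Audit table above: minikube rejects any requested memory below its usable minimum of 1800 MB. A minimal sketch of a dry-run that would pass the memory check, assuming the profile's existing 4096 MB allocation shown in the config dump (flags mirror the failing command and are illustrative only):

    out/minikube-linux-amd64 start -p functional-783762 --dry-run --memory 4096MB --alsologtostderr --driver=kvm2 --container-runtime=crio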
	
	
	==> CRI-O <==
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.653133030Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763770101653101755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177577,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5563a119-ad04-4fff-872a-9b66eba8caea name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.654418202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a9ffa5d-890b-4830-90f8-fa232c5d4fed name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.654514438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a9ffa5d-890b-4830-90f8-fa232c5d4fed name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.654847651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95d552f310077dbb71aa5b22ef7be71d04c3592f8270330f3dc6f8195d61e047,PodSandboxId:2c249bc4e15ab7b56cf51f9078096523a863e77b8bd9e308dd5311716682c55d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763769830827400746,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47d8e8ac-6e19-45c9-a495-6a6b6848a8e4,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26451e4d2e6ccdca80e0db009442a15061a408c0cba565df9e5041bf158049c3,PodSandboxId:93fa5fdbb6fc28f05008e5c01c96f3db881614c1406ddf48130fa76b31fc64be,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763769732046845788,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-cjd8h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4abfb066-72db-4ce1-8963-25d7bee932a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d861ebb3bc7a4b2eef3ae0f5920853815e45e2f5a7c904c15da9579bf1ed8354,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763769707973235782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name
\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937b2a1e9b101160e7748ede304ccb8d1525f30ab7403c6da8c9e069f1e199d0,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763769706921898308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},A
nnotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5168f65cd008045da49893b96bed7e32efb8346d66a7d9d40934ae8e2e9d0e,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763769706766851201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisi
oner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa18a4bc25e43ee3b48de6de00a157e952e7608951c944d746541a037c7d494,PodSandboxId:a8234273d863ee2cc98ae5cd8f82dc2da86843090e62e58decc21cdf13c668b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763769706814494400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622d332041e786701191f78a09bf73dd9406c9297fe6831bf955fa085b914c9d,PodSandboxId:df91e6302180a62a7b6581045d8002297595b188ee7a94262ba4105754685773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e5
61a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763769702074990416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7059d8a3af78a4a0c906c515245dc6d4414fc28d580cab2fddc21a5a3e5385,PodSandboxId:0f7dd710594cad5757444e9682b0fa16f3e34d7039a34fd992c885d80b8d25
8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763769701977969247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c435c405010db72805d64d746d7f
7105f7f12029df399df5284c3e900e2773,PodSandboxId:5bb0fd666e6bd9751e9fdfd576d9503c08d73449309add09c5734a2386ef52a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763769701785735881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c5ba38a97236a5f6237635c69b5123297b3dad277fc5793f3173862464829
d,PodSandboxId:8ba98d2cdc1e383d4c406e30f212b0ab0d901dada207b961499db4ed4d1e26df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763769701716886607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}
,&Container{Id:e3cfcf54044a53d7dd88d7db26f1e416d71caaab702bc46737a7be38cccdb604,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763769701689668484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:c0a6e5bbcecef81643a8b1f25289fb2a355f42d76ed00755be12d759e4376728,PodSandboxId:e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763769664906347403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cf4193002456e3aa12568d11c5337a
f3f25e972ef32773b772906e32177b19,PodSandboxId:6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763769660274653220,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35efea65afed5a14e33a117e8d2e86725efb1398a611e9c9ad4b66ff388490c,PodSandboxId:39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763769660277144916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5219aacb2ecc8f9ab6b7a4db8a50a890fedaa0848f7b8ec907cbcacfc3c6344,PodSandboxId:e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763769660245120828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2853482cd778dbcfedab3ca5c70c06d4b54b2ac79c80e85fa4d6427fb51d475f,PodSandboxId:a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763769643913271123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kuber
netes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a9ffa5d-890b-4830-90f8-fa232c5d4fed name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.700747635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9c0bbf1-ddcd-4848-a2d4-873089037225 name=/runtime.v1.RuntimeService/Version
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.700850339Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9c0bbf1-ddcd-4848-a2d4-873089037225 name=/runtime.v1.RuntimeService/Version
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.702711458Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45fcacd6-987a-4c30-bb70-15533fa621ed name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.703369149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763770101703340718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177577,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45fcacd6-987a-4c30-bb70-15533fa621ed name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.704753917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49ec4c54-e3c0-4f1f-bd73-3daa2037b491 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.705124922Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49ec4c54-e3c0-4f1f-bd73-3daa2037b491 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.706093562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95d552f310077dbb71aa5b22ef7be71d04c3592f8270330f3dc6f8195d61e047,PodSandboxId:2c249bc4e15ab7b56cf51f9078096523a863e77b8bd9e308dd5311716682c55d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763769830827400746,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47d8e8ac-6e19-45c9-a495-6a6b6848a8e4,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26451e4d2e6ccdca80e0db009442a15061a408c0cba565df9e5041bf158049c3,PodSandboxId:93fa5fdbb6fc28f05008e5c01c96f3db881614c1406ddf48130fa76b31fc64be,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763769732046845788,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-cjd8h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4abfb066-72db-4ce1-8963-25d7bee932a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d861ebb3bc7a4b2eef3ae0f5920853815e45e2f5a7c904c15da9579bf1ed8354,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763769707973235782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name
\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937b2a1e9b101160e7748ede304ccb8d1525f30ab7403c6da8c9e069f1e199d0,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763769706921898308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},A
nnotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5168f65cd008045da49893b96bed7e32efb8346d66a7d9d40934ae8e2e9d0e,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763769706766851201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisi
oner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa18a4bc25e43ee3b48de6de00a157e952e7608951c944d746541a037c7d494,PodSandboxId:a8234273d863ee2cc98ae5cd8f82dc2da86843090e62e58decc21cdf13c668b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763769706814494400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622d332041e786701191f78a09bf73dd9406c9297fe6831bf955fa085b914c9d,PodSandboxId:df91e6302180a62a7b6581045d8002297595b188ee7a94262ba4105754685773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e5
61a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763769702074990416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7059d8a3af78a4a0c906c515245dc6d4414fc28d580cab2fddc21a5a3e5385,PodSandboxId:0f7dd710594cad5757444e9682b0fa16f3e34d7039a34fd992c885d80b8d25
8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763769701977969247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c435c405010db72805d64d746d7f
7105f7f12029df399df5284c3e900e2773,PodSandboxId:5bb0fd666e6bd9751e9fdfd576d9503c08d73449309add09c5734a2386ef52a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763769701785735881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c5ba38a97236a5f6237635c69b5123297b3dad277fc5793f3173862464829
d,PodSandboxId:8ba98d2cdc1e383d4c406e30f212b0ab0d901dada207b961499db4ed4d1e26df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763769701716886607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}
,&Container{Id:e3cfcf54044a53d7dd88d7db26f1e416d71caaab702bc46737a7be38cccdb604,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763769701689668484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:c0a6e5bbcecef81643a8b1f25289fb2a355f42d76ed00755be12d759e4376728,PodSandboxId:e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763769664906347403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cf4193002456e3aa12568d11c5337a
f3f25e972ef32773b772906e32177b19,PodSandboxId:6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763769660274653220,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35efea65afed5a14e33a117e8d2e86725efb1398a611e9c9ad4b66ff388490c,PodSandboxId:39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763769660277144916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5219aacb2ecc8f9ab6b7a4db8a50a890fedaa0848f7b8ec907cbcacfc3c6344,PodSandboxId:e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763769660245120828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2853482cd778dbcfedab3ca5c70c06d4b54b2ac79c80e85fa4d6427fb51d475f,PodSandboxId:a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763769643913271123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kuber
netes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49ec4c54-e3c0-4f1f-bd73-3daa2037b491 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.743278941Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d208c271-d6b4-4183-bcf6-77ea88ca9944 name=/runtime.v1.RuntimeService/Version
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.743381547Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d208c271-d6b4-4183-bcf6-77ea88ca9944 name=/runtime.v1.RuntimeService/Version
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.744723371Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9855f37-b2f9-4caa-918e-85afbe3fa952 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.745377570Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763770101745351268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177577,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9855f37-b2f9-4caa-918e-85afbe3fa952 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.746540373Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1fb1f7d-6fe0-4b0e-8874-70d30e31725f name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.746625497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1fb1f7d-6fe0-4b0e-8874-70d30e31725f name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.747304324Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95d552f310077dbb71aa5b22ef7be71d04c3592f8270330f3dc6f8195d61e047,PodSandboxId:2c249bc4e15ab7b56cf51f9078096523a863e77b8bd9e308dd5311716682c55d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763769830827400746,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47d8e8ac-6e19-45c9-a495-6a6b6848a8e4,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26451e4d2e6ccdca80e0db009442a15061a408c0cba565df9e5041bf158049c3,PodSandboxId:93fa5fdbb6fc28f05008e5c01c96f3db881614c1406ddf48130fa76b31fc64be,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763769732046845788,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-cjd8h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4abfb066-72db-4ce1-8963-25d7bee932a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d861ebb3bc7a4b2eef3ae0f5920853815e45e2f5a7c904c15da9579bf1ed8354,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763769707973235782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name
\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937b2a1e9b101160e7748ede304ccb8d1525f30ab7403c6da8c9e069f1e199d0,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763769706921898308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},A
nnotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5168f65cd008045da49893b96bed7e32efb8346d66a7d9d40934ae8e2e9d0e,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763769706766851201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisi
oner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa18a4bc25e43ee3b48de6de00a157e952e7608951c944d746541a037c7d494,PodSandboxId:a8234273d863ee2cc98ae5cd8f82dc2da86843090e62e58decc21cdf13c668b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763769706814494400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622d332041e786701191f78a09bf73dd9406c9297fe6831bf955fa085b914c9d,PodSandboxId:df91e6302180a62a7b6581045d8002297595b188ee7a94262ba4105754685773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e5
61a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763769702074990416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7059d8a3af78a4a0c906c515245dc6d4414fc28d580cab2fddc21a5a3e5385,PodSandboxId:0f7dd710594cad5757444e9682b0fa16f3e34d7039a34fd992c885d80b8d25
8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763769701977969247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c435c405010db72805d64d746d7f
7105f7f12029df399df5284c3e900e2773,PodSandboxId:5bb0fd666e6bd9751e9fdfd576d9503c08d73449309add09c5734a2386ef52a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763769701785735881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c5ba38a97236a5f6237635c69b5123297b3dad277fc5793f3173862464829
d,PodSandboxId:8ba98d2cdc1e383d4c406e30f212b0ab0d901dada207b961499db4ed4d1e26df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763769701716886607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}
,&Container{Id:e3cfcf54044a53d7dd88d7db26f1e416d71caaab702bc46737a7be38cccdb604,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763769701689668484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:c0a6e5bbcecef81643a8b1f25289fb2a355f42d76ed00755be12d759e4376728,PodSandboxId:e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763769664906347403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cf4193002456e3aa12568d11c5337a
f3f25e972ef32773b772906e32177b19,PodSandboxId:6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763769660274653220,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35efea65afed5a14e33a117e8d2e86725efb1398a611e9c9ad4b66ff388490c,PodSandboxId:39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763769660277144916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5219aacb2ecc8f9ab6b7a4db8a50a890fedaa0848f7b8ec907cbcacfc3c6344,PodSandboxId:e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763769660245120828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2853482cd778dbcfedab3ca5c70c06d4b54b2ac79c80e85fa4d6427fb51d475f,PodSandboxId:a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763769643913271123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kuber
netes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1fb1f7d-6fe0-4b0e-8874-70d30e31725f name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.778564017Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2065e4c-d5dd-409a-b9c2-45e1a1dabade name=/runtime.v1.RuntimeService/Version
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.778767115Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2065e4c-d5dd-409a-b9c2-45e1a1dabade name=/runtime.v1.RuntimeService/Version
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.781445691Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a40984bc-ed1a-4829-ac7a-ec6c56dbce0c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.783007139Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763770101782975171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:177577,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a40984bc-ed1a-4829-ac7a-ec6c56dbce0c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.784347260Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19a63df8-431e-4396-8f30-32cd368ccffc name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.784581624Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19a63df8-431e-4396-8f30-32cd368ccffc name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:08:21 functional-783762 crio[5816]: time="2025-11-22 00:08:21.785415975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95d552f310077dbb71aa5b22ef7be71d04c3592f8270330f3dc6f8195d61e047,PodSandboxId:2c249bc4e15ab7b56cf51f9078096523a863e77b8bd9e308dd5311716682c55d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763769830827400746,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47d8e8ac-6e19-45c9-a495-6a6b6848a8e4,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26451e4d2e6ccdca80e0db009442a15061a408c0cba565df9e5041bf158049c3,PodSandboxId:93fa5fdbb6fc28f05008e5c01c96f3db881614c1406ddf48130fa76b31fc64be,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763769732046845788,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-cjd8h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4abfb066-72db-4ce1-8963-25d7bee932a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d861ebb3bc7a4b2eef3ae0f5920853815e45e2f5a7c904c15da9579bf1ed8354,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763769707973235782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name
\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937b2a1e9b101160e7748ede304ccb8d1525f30ab7403c6da8c9e069f1e199d0,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763769706921898308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},A
nnotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5168f65cd008045da49893b96bed7e32efb8346d66a7d9d40934ae8e2e9d0e,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763769706766851201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisi
oner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa18a4bc25e43ee3b48de6de00a157e952e7608951c944d746541a037c7d494,PodSandboxId:a8234273d863ee2cc98ae5cd8f82dc2da86843090e62e58decc21cdf13c668b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763769706814494400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622d332041e786701191f78a09bf73dd9406c9297fe6831bf955fa085b914c9d,PodSandboxId:df91e6302180a62a7b6581045d8002297595b188ee7a94262ba4105754685773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e5
61a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763769702074990416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7059d8a3af78a4a0c906c515245dc6d4414fc28d580cab2fddc21a5a3e5385,PodSandboxId:0f7dd710594cad5757444e9682b0fa16f3e34d7039a34fd992c885d80b8d25
8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763769701977969247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c435c405010db72805d64d746d7f
7105f7f12029df399df5284c3e900e2773,PodSandboxId:5bb0fd666e6bd9751e9fdfd576d9503c08d73449309add09c5734a2386ef52a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763769701785735881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c5ba38a97236a5f6237635c69b5123297b3dad277fc5793f3173862464829
d,PodSandboxId:8ba98d2cdc1e383d4c406e30f212b0ab0d901dada207b961499db4ed4d1e26df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763769701716886607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}
,&Container{Id:e3cfcf54044a53d7dd88d7db26f1e416d71caaab702bc46737a7be38cccdb604,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763769701689668484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:c0a6e5bbcecef81643a8b1f25289fb2a355f42d76ed00755be12d759e4376728,PodSandboxId:e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763769664906347403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cf4193002456e3aa12568d11c5337a
f3f25e972ef32773b772906e32177b19,PodSandboxId:6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763769660274653220,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35efea65afed5a14e33a117e8d2e86725efb1398a611e9c9ad4b66ff388490c,PodSandboxId:39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763769660277144916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5219aacb2ecc8f9ab6b7a4db8a50a890fedaa0848f7b8ec907cbcacfc3c6344,PodSandboxId:e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763769660245120828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2853482cd778dbcfedab3ca5c70c06d4b54b2ac79c80e85fa4d6427fb51d475f,PodSandboxId:a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763769643913271123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kuber
netes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19a63df8-431e-4396-8f30-32cd368ccffc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	95d552f310077       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e     4 minutes ago       Exited              mount-munger              0                   2c249bc4e15ab       busybox-mount                               default
	26451e4d2e6cc       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   6 minutes ago       Running             echo-server               0                   93fa5fdbb6fc2       hello-node-connect-7d85dfc575-cjd8h         default
	d861ebb3bc7a4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                        6 minutes ago       Running             kube-apiserver            1                   189d6f9812ba4       kube-apiserver-functional-783762            kube-system
	937b2a1e9b101       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                        6 minutes ago       Exited              kube-apiserver            0                   189d6f9812ba4       kube-apiserver-functional-783762            kube-system
	3fa18a4bc25e4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        6 minutes ago       Running             coredns                   2                   a8234273d863e       coredns-66bc5c9577-4hlw7                    kube-system
	bb5168f65cd00       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        6 minutes ago       Running             storage-provisioner       5                   5ba6b9ce4a22a       storage-provisioner                         kube-system
	622d332041e78       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        6 minutes ago       Running             kube-controller-manager   3                   df91e6302180a       kube-controller-manager-functional-783762   kube-system
	ae7059d8a3af7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        6 minutes ago       Running             kube-scheduler            3                   0f7dd710594ca       kube-scheduler-functional-783762            kube-system
	c0c435c405010       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                        6 minutes ago       Running             kube-proxy                3                   5bb0fd666e6bd       kube-proxy-6cqt7                            kube-system
	67c5ba38a9723       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        6 minutes ago       Running             etcd                      3                   8ba98d2cdc1e3       etcd-functional-783762                      kube-system
	e3cfcf54044a5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        6 minutes ago       Exited              storage-provisioner       4                   5ba6b9ce4a22a       storage-provisioner                         kube-system
	c0a6e5bbcecef       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                        7 minutes ago       Exited              kube-proxy                2                   e4ec1069bc019       kube-proxy-6cqt7                            kube-system
	f35efea65afed       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        7 minutes ago       Exited              kube-scheduler            2                   39e5a20696df9       kube-scheduler-functional-783762            kube-system
	04cf419300245       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        7 minutes ago       Exited              kube-controller-manager   2                   6ff5789062c7b       kube-controller-manager-functional-783762   kube-system
	b5219aacb2ecc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        7 minutes ago       Exited              etcd                      2                   e9301a0a5805e       etcd-functional-783762                      kube-system
	2853482cd778d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        7 minutes ago       Exited              coredns                   1                   a513bd37a8530       coredns-66bc5c9577-4hlw7                    kube-system
	
	
	==> coredns [2853482cd778dbcfedab3ca5c70c06d4b54b2ac79c80e85fa4d6427fb51d475f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56778 - 24880 "HINFO IN 510444860572811029.660604583740510837. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.498605514s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3fa18a4bc25e43ee3b48de6de00a157e952e7608951c944d746541a037c7d494] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] 127.0.0.1:46548 - 19382 "HINFO IN 3244271735892804347.7610011136115581116. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.855078147s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               functional-783762
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-783762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=functional-783762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_00_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:00:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-783762
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:08:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:04:19 +0000   Sat, 22 Nov 2025 00:00:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:04:19 +0000   Sat, 22 Nov 2025 00:00:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:04:19 +0000   Sat, 22 Nov 2025 00:00:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:04:19 +0000   Sat, 22 Nov 2025 00:00:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    functional-783762
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 d561c82d24c84e51aed0106657c2085c
	  System UUID:                d561c82d-24c8-4e51-aed0-106657c2085c
	  Boot ID:                    b0cf542f-e0b4-488f-b149-70c03b493ebb
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-2dc5f                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  default                     hello-node-connect-7d85dfc575-cjd8h           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  default                     mysql-5bb876957f-qhcz2                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    6m11s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-4hlw7                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m6s
	  kube-system                 etcd-functional-783762                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m13s
	  kube-system                 kube-apiserver-functional-783762              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-controller-manager-functional-783762     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-proxy-6cqt7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-scheduler-functional-783762              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-xc2xc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-284lt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m5s                   kube-proxy       
	  Normal  Starting                 6m25s                  kube-proxy       
	  Normal  Starting                 7m16s                  kube-proxy       
	  Normal  Starting                 7m35s                  kube-proxy       
	  Normal  Starting                 8m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m18s (x8 over 8m19s)  kubelet          Node functional-783762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m18s (x8 over 8m19s)  kubelet          Node functional-783762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m18s (x7 over 8m19s)  kubelet          Node functional-783762 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m11s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m11s                  kubelet          Node functional-783762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m11s                  kubelet          Node functional-783762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m11s                  kubelet          Node functional-783762 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m10s                  kubelet          Node functional-783762 status is now: NodeReady
	  Normal  RegisteredNode           8m7s                   node-controller  Node functional-783762 event: Registered Node functional-783762 in Controller
	  Normal  CIDRAssignmentFailed     8m7s                   cidrAllocator    Node functional-783762 status is now: CIDRAssignmentFailed
	  Normal  NodeAllocatableEnforced  7m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m23s (x8 over 7m23s)  kubelet          Node functional-783762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m23s (x8 over 7m23s)  kubelet          Node functional-783762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m23s (x7 over 7m23s)  kubelet          Node functional-783762 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m23s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m15s                  node-controller  Node functional-783762 event: Registered Node functional-783762 in Controller
	  Normal  Starting                 6m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m37s (x3 over 6m37s)  kubelet          Node functional-783762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s (x3 over 6m37s)  kubelet          Node functional-783762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s (x3 over 6m37s)  kubelet          Node functional-783762 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m29s                  node-controller  Node functional-783762 event: Registered Node functional-783762 in Controller
	
	
	==> dmesg <==
	[  +0.000045] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000013] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.184426] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083639] kauditd_printk_skb: 1 callbacks suppressed
	[Nov22 00:00] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.145077] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.340851] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.297709] kauditd_printk_skb: 252 callbacks suppressed
	[  +0.113526] kauditd_printk_skb: 44 callbacks suppressed
	[  +4.759596] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.379699] kauditd_printk_skb: 284 callbacks suppressed
	[Nov22 00:01] kauditd_printk_skb: 98 callbacks suppressed
	[  +7.640718] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.360006] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.113329] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.570529] kauditd_printk_skb: 78 callbacks suppressed
	[  +1.004551] kauditd_printk_skb: 277 callbacks suppressed
	[  +4.444666] kauditd_printk_skb: 70 callbacks suppressed
	[Nov22 00:02] kauditd_printk_skb: 133 callbacks suppressed
	[  +6.671222] kauditd_printk_skb: 53 callbacks suppressed
	[ +19.842391] kauditd_printk_skb: 110 callbacks suppressed
	[Nov22 00:03] kauditd_printk_skb: 25 callbacks suppressed
	[Nov22 00:05] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [67c5ba38a97236a5f6237635c69b5123297b3dad277fc5793f3173862464829d] <==
	{"level":"warn","ts":"2025-11-22T00:01:49.532581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.550609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.553213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.561695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.570903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.578124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.585459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.598496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.610962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.614359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.625644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.637505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.645274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.657386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.668836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.678186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.688728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.698178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.707480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.718525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.733117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.748915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.768596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.775168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.821712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42830","server-name":"","error":"EOF"}
	
	
	==> etcd [b5219aacb2ecc8f9ab6b7a4db8a50a890fedaa0848f7b8ec907cbcacfc3c6344] <==
	{"level":"warn","ts":"2025-11-22T00:01:02.790222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:02.806118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:02.847308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:02.872404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:02.902639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:02.922989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:03.026408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51086","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:01:27.413794Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-22T00:01:27.413883Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-783762","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.76:2380"],"advertise-client-urls":["https://192.168.39.76:2379"]}
	{"level":"error","ts":"2025-11-22T00:01:27.420370Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-22T00:01:27.493936Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"info","ts":"2025-11-22T00:01:27.494086Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4f06aa0eaa8889d9","current-leader-member-id":"4f06aa0eaa8889d9"}
	{"level":"info","ts":"2025-11-22T00:01:27.494207Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-22T00:01:27.494218Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-11-22T00:01:27.493995Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-22T00:01:27.494425Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-22T00:01:27.494488Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-22T00:01:27.494495Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-22T00:01:27.494528Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.76:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-22T00:01:27.494535Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.76:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-22T00:01:27.494540Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.76:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-22T00:01:27.497717Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.76:2380"}
	{"level":"error","ts":"2025-11-22T00:01:27.497803Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.76:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-22T00:01:27.497831Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.76:2380"}
	{"level":"info","ts":"2025-11-22T00:01:27.497836Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-783762","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.76:2380"],"advertise-client-urls":["https://192.168.39.76:2379"]}
	
	
	==> kernel <==
	 00:08:22 up 8 min,  0 users,  load average: 0.18, 0.37, 0.28
	Linux functional-783762 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [937b2a1e9b101160e7748ede304ccb8d1525f30ab7403c6da8c9e069f1e199d0] <==
	I1122 00:01:47.323261       1 options.go:263] external host was not specified, using 192.168.39.76
	I1122 00:01:47.342461       1 server.go:150] Version: v1.34.1
	I1122 00:01:47.344089       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1122 00:01:47.350369       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-apiserver [d861ebb3bc7a4b2eef3ae0f5920853815e45e2f5a7c904c15da9579bf1ed8354] <==
	I1122 00:01:50.581637       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:01:50.584164       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1122 00:01:50.584812       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:01:50.613117       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:01:50.617194       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:01:50.617253       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:01:50.633429       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:01:50.633455       1 policy_source.go:240] refreshing policies
	I1122 00:01:50.668250       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:01:51.389775       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:01:52.299244       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:01:52.312669       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:01:52.363823       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:01:52.396406       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:01:52.411780       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:01:54.192121       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:01:54.242406       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:01:54.289696       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:02:05.981512       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.230.73"}
	I1122 00:02:10.365289       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.229.206"}
	I1122 00:02:11.008541       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.112.146"}
	I1122 00:02:18.200764       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.69.213"}
	I1122 00:03:57.914693       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:03:58.256928       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.197.188"}
	I1122 00:03:58.288243       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.112.221"}
	
	
	==> kube-controller-manager [04cf4193002456e3aa12568d11c5337af3f25e972ef32773b772906e32177b19] <==
	I1122 00:01:07.169996       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:01:07.170099       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:01:07.171872       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:01:07.172227       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:01:07.173981       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:01:07.174640       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:01:07.180084       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:01:07.180466       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1122 00:01:07.184965       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:01:07.184982       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:01:07.185568       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:01:07.188346       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:01:07.191724       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:01:07.191803       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:01:07.191883       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-783762"
	I1122 00:01:07.191943       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:01:07.200160       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:01:07.206539       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:01:07.208792       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:01:07.212867       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:01:07.212941       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:01:07.212952       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:01:07.212957       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:01:07.215230       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:01:07.216946       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [622d332041e786701191f78a09bf73dd9406c9297fe6831bf955fa085b914c9d] <==
	I1122 00:01:53.915146       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:01:53.926745       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:01:53.927772       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:01:53.930209       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1122 00:01:53.935999       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:01:53.936102       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1122 00:01:53.936128       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:01:53.936196       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1122 00:01:53.937461       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:01:53.937508       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:01:53.937509       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:01:53.938858       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:01:53.940109       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:01:53.940123       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:01:53.946617       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:01:53.946660       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:01:53.946667       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1122 00:03:58.049870       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.062735       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.063775       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.079287       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.079327       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.090808       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.091966       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.101149       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [c0a6e5bbcecef81643a8b1f25289fb2a355f42d76ed00755be12d759e4376728] <==
	I1122 00:01:05.199636       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:01:05.299976       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:01:05.300123       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.76"]
	E1122 00:01:05.300255       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:01:05.355005       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1122 00:01:05.355111       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1122 00:01:05.355133       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:01:05.368603       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:01:05.369525       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:01:05.370098       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:01:05.378679       1 config.go:200] "Starting service config controller"
	I1122 00:01:05.379181       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:01:05.379281       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:01:05.379359       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:01:05.379373       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:01:05.379377       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:01:05.381930       1 config.go:309] "Starting node config controller"
	I1122 00:01:05.382118       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:01:05.382145       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:01:05.479858       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:01:05.479996       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:01:05.479915       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c0c435c405010db72805d64d746d7f7105f7f12029df399df5284c3e900e2773] <==
	E1122 00:01:50.503748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-783762\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1122 00:01:56.849670       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:01:56.849736       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.76"]
	E1122 00:01:56.849823       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:01:56.891422       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1122 00:01:56.891506       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1122 00:01:56.891532       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:01:56.902453       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:01:56.902815       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:01:56.902851       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:01:56.907904       1 config.go:200] "Starting service config controller"
	I1122 00:01:56.908112       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:01:56.908180       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:01:56.908187       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:01:56.908199       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:01:56.908204       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:01:56.912411       1 config.go:309] "Starting node config controller"
	I1122 00:01:56.912542       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:01:56.912551       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:01:57.008980       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:01:57.009061       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:01:57.009135       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ae7059d8a3af78a4a0c906c515245dc6d4414fc28d580cab2fddc21a5a3e5385] <==
	I1122 00:01:46.029707       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:01:46.029751       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:01:46.032102       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:46.032145       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:46.032850       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:01:46.032926       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:01:46.132551       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1122 00:01:50.422801       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:01:50.422864       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:01:50.422881       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:01:50.422897       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:01:50.422911       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:01:50.422922       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:01:50.422930       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:01:50.422942       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:01:50.422956       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:01:50.423168       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:01:50.423205       1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:01:50.423216       1 reflector.go:205] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:01:50.423229       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:01:50.423244       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:01:50.433368       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:01:50.433490       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:01:50.433535       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:01:50.433558       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	
	
	==> kube-scheduler [f35efea65afed5a14e33a117e8d2e86725efb1398a611e9c9ad4b66ff388490c] <==
	I1122 00:01:01.732896       1 serving.go:386] Generated self-signed cert in-memory
	W1122 00:01:03.754650       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 00:01:03.754673       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:01:03.754681       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 00:01:03.754686       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 00:01:03.832520       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:01:03.834121       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:01:03.840611       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:01:03.840695       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:03.841229       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:03.840710       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:01:03.949210       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:27.426479       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1122 00:01:27.426592       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1122 00:01:27.426611       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1122 00:01:27.426632       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:27.426820       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1122 00:01:27.426855       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 22 00:07:27 functional-783762 kubelet[6792]: E1122 00:07:27.595744    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qhcz2" podUID="bad1e5d7-de2e-4b08-b406-2e87deac3a9c"
	Nov 22 00:07:35 functional-783762 kubelet[6792]: E1122 00:07:35.982424    6792 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763770055982209068  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177577}  inodes_used:{value:89}}"
	Nov 22 00:07:35 functional-783762 kubelet[6792]: E1122 00:07:35.982444    6792 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763770055982209068  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177577}  inodes_used:{value:89}}"
	Nov 22 00:07:40 functional-783762 kubelet[6792]: E1122 00:07:40.545755    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qhcz2" podUID="bad1e5d7-de2e-4b08-b406-2e87deac3a9c"
	Nov 22 00:07:45 functional-783762 kubelet[6792]: E1122 00:07:45.843402    6792 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod9b393547de92689fa32a14bea69079b0/crio-39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf: Error finding container 39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf: Status 404 returned error can't find the container with id 39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf
	Nov 22 00:07:45 functional-783762 kubelet[6792]: E1122 00:07:45.844459    6792 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod3381fa89-cdde-43bf-a38c-f140281f28af/crio-e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511: Error finding container e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511: Status 404 returned error can't find the container with id e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511
	Nov 22 00:07:45 functional-783762 kubelet[6792]: E1122 00:07:45.844920    6792 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd6a6bb5d624dd9724d6436c47c57eec9/crio-6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48: Error finding container 6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48: Status 404 returned error can't find the container with id 6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48
	Nov 22 00:07:45 functional-783762 kubelet[6792]: E1122 00:07:45.845508    6792 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod5017c5ab7e5f7612c69e258534ddf5e4/crio-e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d: Error finding container e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d: Status 404 returned error can't find the container with id e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d
	Nov 22 00:07:45 functional-783762 kubelet[6792]: E1122 00:07:45.846144    6792 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod5cb72e66-14a4-4209-861c-be0707b73762/crio-a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972: Error finding container a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972: Status 404 returned error can't find the container with id a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972
	Nov 22 00:07:45 functional-783762 kubelet[6792]: E1122 00:07:45.986748    6792 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763770065985468466  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177577}  inodes_used:{value:89}}"
	Nov 22 00:07:45 functional-783762 kubelet[6792]: E1122 00:07:45.986778    6792 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763770065985468466  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177577}  inodes_used:{value:89}}"
	Nov 22 00:07:54 functional-783762 kubelet[6792]: E1122 00:07:54.546320    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qhcz2" podUID="bad1e5d7-de2e-4b08-b406-2e87deac3a9c"
	Nov 22 00:07:55 functional-783762 kubelet[6792]: E1122 00:07:55.989349    6792 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763770075988722688  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177577}  inodes_used:{value:89}}"
	Nov 22 00:07:55 functional-783762 kubelet[6792]: E1122 00:07:55.989398    6792 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763770075988722688  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177577}  inodes_used:{value:89}}"
	Nov 22 00:07:57 functional-783762 kubelet[6792]: E1122 00:07:57.691674    6792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Nov 22 00:07:57 functional-783762 kubelet[6792]: E1122 00:07:57.691737    6792 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Nov 22 00:07:57 functional-783762 kubelet[6792]: E1122 00:07:57.691980    6792 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-2dc5f_default(db6d5163-56fa-41b6-9f55-3ebb021aa3c3): ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 22 00:07:57 functional-783762 kubelet[6792]: E1122 00:07:57.692072    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-2dc5f" podUID="db6d5163-56fa-41b6-9f55-3ebb021aa3c3"
	Nov 22 00:08:05 functional-783762 kubelet[6792]: E1122 00:08:05.991398    6792 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763770085990707480  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177577}  inodes_used:{value:89}}"
	Nov 22 00:08:05 functional-783762 kubelet[6792]: E1122 00:08:05.991453    6792 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763770085990707480  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177577}  inodes_used:{value:89}}"
	Nov 22 00:08:06 functional-783762 kubelet[6792]: E1122 00:08:06.545789    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qhcz2" podUID="bad1e5d7-de2e-4b08-b406-2e87deac3a9c"
	Nov 22 00:08:10 functional-783762 kubelet[6792]: E1122 00:08:10.542439    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-2dc5f" podUID="db6d5163-56fa-41b6-9f55-3ebb021aa3c3"
	Nov 22 00:08:15 functional-783762 kubelet[6792]: E1122 00:08:15.994093    6792 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763770095993530959  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177577}  inodes_used:{value:89}}"
	Nov 22 00:08:15 functional-783762 kubelet[6792]: E1122 00:08:15.994124    6792 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763770095993530959  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:177577}  inodes_used:{value:89}}"
	Nov 22 00:08:22 functional-783762 kubelet[6792]: E1122 00:08:22.544717    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-2dc5f" podUID="db6d5163-56fa-41b6-9f55-3ebb021aa3c3"
	
	
	==> storage-provisioner [bb5168f65cd008045da49893b96bed7e32efb8346d66a7d9d40934ae8e2e9d0e] <==
	W1122 00:07:57.190851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:59.194819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:59.202049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:01.207004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:01.216998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:03.221133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:03.227230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:05.233479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:05.244556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:07.248735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:07.258384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:09.261592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:09.268137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:11.271996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:11.277898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:13.282383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:13.288414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:15.292927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:15.302200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:17.306633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:17.312120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:19.315558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:19.324551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:21.327497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:08:21.333453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e3cfcf54044a53d7dd88d7db26f1e416d71caaab702bc46737a7be38cccdb604] <==
	I1122 00:01:42.079950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:01:42.087124       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-783762 -n functional-783762
helpers_test.go:269: (dbg) Run:  kubectl --context functional-783762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-2dc5f mysql-5bb876957f-qhcz2 sp-pod dashboard-metrics-scraper-77bf4d6c4c-xc2xc kubernetes-dashboard-855c9754f9-284lt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-783762 describe pod busybox-mount hello-node-75c85bcc94-2dc5f mysql-5bb876957f-qhcz2 sp-pod dashboard-metrics-scraper-77bf4d6c4c-xc2xc kubernetes-dashboard-855c9754f9-284lt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-783762 describe pod busybox-mount hello-node-75c85bcc94-2dc5f mysql-5bb876957f-qhcz2 sp-pod dashboard-metrics-scraper-77bf4d6c4c-xc2xc kubernetes-dashboard-855c9754f9-284lt: exit status 1 (112.169329ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-783762/192.168.39.76
	Start Time:       Sat, 22 Nov 2025 00:02:22 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://95d552f310077dbb71aa5b22ef7be71d04c3592f8270330f3dc6f8195d61e047
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 22 Nov 2025 00:03:50 +0000
	      Finished:     Sat, 22 Nov 2025 00:03:50 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lhmwv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-lhmwv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m1s   default-scheduler  Successfully assigned default/busybox-mount to functional-783762
	  Normal  Pulling    6m1s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m33s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.335s (1m28.082s including waiting). Image size: 4631262 bytes.
	  Normal  Created    4m33s  kubelet            Created container: mount-munger
	  Normal  Started    4m33s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-2dc5f
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-783762/192.168.39.76
	Start Time:       Sat, 22 Nov 2025 00:02:18 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svcql (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-svcql:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m5s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-2dc5f to functional-783762
	  Normal   Pulling    3m10s (x3 over 6m5s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     26s (x3 over 5m11s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     26s (x3 over 5m11s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x4 over 5m11s)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     1s (x4 over 5m11s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-qhcz2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-783762/192.168.39.76
	Start Time:       Sat, 22 Nov 2025 00:02:11 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dxdkj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dxdkj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m12s                 default-scheduler  Successfully assigned default/mysql-5bb876957f-qhcz2 to functional-783762
	  Warning  Failed     4m3s (x2 over 5m41s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     56s (x3 over 5m41s)   kubelet            Error: ErrImagePull
	  Warning  Failed     56s                   kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    17s (x5 over 5m41s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     17s (x5 over 5m41s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    5s (x4 over 6m11s)    kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-783762/192.168.39.76
	Start Time:       Sat, 22 Nov 2025 00:02:20 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zv6fz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-zv6fz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-783762
	  Warning  Failed     4m35s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     92s (x2 over 4m35s)  kubelet            Error: ErrImagePull
	  Warning  Failed     92s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    79s (x2 over 4m35s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     79s (x2 over 4m35s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    66s (x3 over 6m2s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-xc2xc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-284lt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-783762 describe pod busybox-mount hello-node-75c85bcc94-2dc5f mysql-5bb876957f-qhcz2 sp-pod dashboard-metrics-scraper-77bf4d6c4c-xc2xc kubernetes-dashboard-855c9754f9-284lt: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (370.74s)
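For reference, the sp-pod described in the logs above is a PVC-backed pod roughly like the sketch below, reconstructed from the kubectl describe output (pod name, label, claim name, mount path, and image). The PVC's access mode and storage size are not visible in the logs and are assumptions here; the actual testdata manifests used by the test may differ.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim                  # claim name taken from the describe output
spec:
  accessModes:
    - ReadWriteOnce              # assumption; not shown in the logs
  resources:
    requests:
      storage: 500Mi             # assumption; size is not shown in the logs
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
    - name: myfrontend
      image: docker.io/nginx
      volumeMounts:
        - name: mypd
          mountPath: /tmp/mount
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim

Note that the events above show only image-pull failures (toomanyrequests from Docker Hub); the pod scheduled and its sandbox started, so the failure is a registry rate limit rather than a provisioning problem.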

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-783762 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-qhcz2" [bad1e5d7-de2e-4b08-b406-2e87deac3a9c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-783762 -n functional-783762
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-11-22 00:12:11.341093621 +0000 UTC m=+1536.205385304
functional_test.go:1804: (dbg) Run:  kubectl --context functional-783762 describe po mysql-5bb876957f-qhcz2 -n default
functional_test.go:1804: (dbg) kubectl --context functional-783762 describe po mysql-5bb876957f-qhcz2 -n default:
Name:             mysql-5bb876957f-qhcz2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-783762/192.168.39.76
Start Time:       Sat, 22 Nov 2025 00:02:11 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dxdkj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-dxdkj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-qhcz2 to functional-783762
  Warning  Failed     4m44s                 kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     103s (x3 over 9m29s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     103s (x4 over 9m29s)  kubelet            Error: ErrImagePull
  Normal   BackOff    28s (x11 over 9m29s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
  Warning  Failed     28s (x11 over 9m29s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    16s (x5 over 9m59s)   kubelet            Pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-783762 logs mysql-5bb876957f-qhcz2 -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-783762 logs mysql-5bb876957f-qhcz2 -n default: exit status 1 (83.42432ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-qhcz2" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-783762 logs mysql-5bb876957f-qhcz2 -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
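For context, testdata/mysql.yaml (applied by the kubectl replace step at functional_test.go:1798 above) produces a Deployment roughly like the sketch below, reconstructed from the mysql-5bb876957f-qhcz2 describe output. Anything not visible in that output, such as the replica count and selector, is an assumption, and the real manifest may differ.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1                      # assumption; only one pod appears in the logs
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: docker.io/mysql:5.7
          ports:
            - name: mysql
              containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
          resources:
            requests:
              cpu: 600m
              memory: 512Mi
            limits:
              cpu: 700m
              memory: 700Mi

The 10m0s wait at functional_test.go:1804 is roughly equivalent to kubectl --context functional-783762 wait --for=condition=Ready pod -l app=mysql --timeout=10m; it times out here because every pull of docker.io/mysql:5.7 hits the unauthenticated Docker Hub rate limit.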
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-783762 -n functional-783762
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-783762 logs -n 25: (1.517347577s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-783762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1089069711/001:/mount3 --alsologtostderr -v=1 │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ ssh            │ functional-783762 ssh findmnt -T /mount1                                                                           │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │ 22 Nov 25 00:03 UTC │
	│ ssh            │ functional-783762 ssh findmnt -T /mount2                                                                           │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │ 22 Nov 25 00:03 UTC │
	│ ssh            │ functional-783762 ssh findmnt -T /mount3                                                                           │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │ 22 Nov 25 00:03 UTC │
	│ mount          │ -p functional-783762 --kill=true                                                                                   │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ start          │ -p functional-783762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio            │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ start          │ -p functional-783762 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                      │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ start          │ -p functional-783762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio            │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-783762 --alsologtostderr -v=1                                                     │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:03 UTC │                     │
	│ ssh            │ functional-783762 ssh sudo cat /etc/ssl/certs/250664.pem                                                           │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ ssh            │ functional-783762 ssh sudo cat /usr/share/ca-certificates/250664.pem                                               │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ ssh            │ functional-783762 ssh sudo cat /etc/ssl/certs/51391683.0                                                           │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ ssh            │ functional-783762 ssh sudo cat /etc/ssl/certs/2506642.pem                                                          │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ ssh            │ functional-783762 ssh sudo cat /usr/share/ca-certificates/2506642.pem                                              │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ ssh            │ functional-783762 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                           │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ image          │ functional-783762 image ls --format short --alsologtostderr                                                        │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ image          │ functional-783762 image ls --format yaml --alsologtostderr                                                         │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ ssh            │ functional-783762 ssh pgrep buildkitd                                                                              │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │                     │
	│ image          │ functional-783762 image build -t localhost/my-image:functional-783762 testdata/build --alsologtostderr             │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ image          │ functional-783762 image ls                                                                                         │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ image          │ functional-783762 image ls --format json --alsologtostderr                                                         │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ image          │ functional-783762 image ls --format table --alsologtostderr                                                        │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ update-context │ functional-783762 update-context --alsologtostderr -v=2                                                            │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ update-context │ functional-783762 update-context --alsologtostderr -v=2                                                            │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	│ update-context │ functional-783762 update-context --alsologtostderr -v=2                                                            │ functional-783762 │ jenkins │ v1.37.0 │ 22 Nov 25 00:08 UTC │ 22 Nov 25 00:08 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:03:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:03:56.869086  260652 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:03:56.869638  260652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:03:56.869656  260652 out.go:374] Setting ErrFile to fd 2...
	I1122 00:03:56.869662  260652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:03:56.870298  260652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 00:03:56.871056  260652 out.go:368] Setting JSON to false
	I1122 00:03:56.872195  260652 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27965,"bootTime":1763741872,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:03:56.872344  260652 start.go:143] virtualization: kvm guest
	I1122 00:03:56.874227  260652 out.go:179] * [functional-783762] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:03:56.875684  260652 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:03:56.875758  260652 notify.go:221] Checking for updates...
	I1122 00:03:56.878576  260652 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:03:56.880113  260652 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1122 00:03:56.884960  260652 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	I1122 00:03:56.886574  260652 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:03:56.887970  260652 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:03:56.889736  260652 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:03:56.890303  260652 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:03:56.921947  260652 out.go:179] * Using the kvm2 driver based on existing profile
	I1122 00:03:56.923317  260652 start.go:309] selected driver: kvm2
	I1122 00:03:56.923337  260652 start.go:930] validating driver "kvm2" against &{Name:functional-783762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-783762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:03:56.923483  260652 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:03:56.925829  260652 out.go:203] 
	W1122 00:03:56.927360  260652 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1122 00:03:56.928728  260652 out.go:203] 
	
	
	==> CRI-O <==
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.196466318Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763770332196436834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203237,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e323d4a-56aa-452a-867a-fac26d183707 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.197537375Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99add8cc-c1ff-42fc-98b7-76d562622a45 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.197596292Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99add8cc-c1ff-42fc-98b7-76d562622a45 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.197916861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95d552f310077dbb71aa5b22ef7be71d04c3592f8270330f3dc6f8195d61e047,PodSandboxId:2c249bc4e15ab7b56cf51f9078096523a863e77b8bd9e308dd5311716682c55d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763769830827400746,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47d8e8ac-6e19-45c9-a495-6a6b6848a8e4,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26451e4d2e6ccdca80e0db009442a15061a408c0cba565df9e5041bf158049c3,PodSandboxId:93fa5fdbb6fc28f05008e5c01c96f3db881614c1406ddf48130fa76b31fc64be,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763769732046845788,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-cjd8h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4abfb066-72db-4ce1-8963-25d7bee932a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d861ebb3bc7a4b2eef3ae0f5920853815e45e2f5a7c904c15da9579bf1ed8354,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763769707973235782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name
\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937b2a1e9b101160e7748ede304ccb8d1525f30ab7403c6da8c9e069f1e199d0,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763769706921898308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},A
nnotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5168f65cd008045da49893b96bed7e32efb8346d66a7d9d40934ae8e2e9d0e,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763769706766851201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisi
oner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa18a4bc25e43ee3b48de6de00a157e952e7608951c944d746541a037c7d494,PodSandboxId:a8234273d863ee2cc98ae5cd8f82dc2da86843090e62e58decc21cdf13c668b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763769706814494400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622d332041e786701191f78a09bf73dd9406c9297fe6831bf955fa085b914c9d,PodSandboxId:df91e6302180a62a7b6581045d8002297595b188ee7a94262ba4105754685773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e5
61a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763769702074990416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7059d8a3af78a4a0c906c515245dc6d4414fc28d580cab2fddc21a5a3e5385,PodSandboxId:0f7dd710594cad5757444e9682b0fa16f3e34d7039a34fd992c885d80b8d25
8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763769701977969247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c435c405010db72805d64d746d7f
7105f7f12029df399df5284c3e900e2773,PodSandboxId:5bb0fd666e6bd9751e9fdfd576d9503c08d73449309add09c5734a2386ef52a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763769701785735881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c5ba38a97236a5f6237635c69b5123297b3dad277fc5793f3173862464829
d,PodSandboxId:8ba98d2cdc1e383d4c406e30f212b0ab0d901dada207b961499db4ed4d1e26df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763769701716886607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}
,&Container{Id:e3cfcf54044a53d7dd88d7db26f1e416d71caaab702bc46737a7be38cccdb604,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763769701689668484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:c0a6e5bbcecef81643a8b1f25289fb2a355f42d76ed00755be12d759e4376728,PodSandboxId:e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763769664906347403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cf4193002456e3aa12568d11c5337a
f3f25e972ef32773b772906e32177b19,PodSandboxId:6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763769660274653220,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35efea65afed5a14e33a117e8d2e86725efb1398a611e9c9ad4b66ff388490c,PodSandboxId:39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763769660277144916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5219aacb2ecc8f9ab6b7a4db8a50a890fedaa0848f7b8ec907cbcacfc3c6344,PodSandboxId:e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763769660245120828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2853482cd778dbcfedab3ca5c70c06d4b54b2ac79c80e85fa4d6427fb51d475f,PodSandboxId:a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763769643913271123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kuber
netes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99add8cc-c1ff-42fc-98b7-76d562622a45 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.242830364Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ee61064-2863-4ec0-9876-98121946590c name=/runtime.v1.RuntimeService/Version
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.242908550Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ee61064-2863-4ec0-9876-98121946590c name=/runtime.v1.RuntimeService/Version
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.244911222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3370f4f-b876-48f8-b124-9ace9069edc4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.245989474Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763770332245930946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203237,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3370f4f-b876-48f8-b124-9ace9069edc4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.247349374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ce33753-e5a5-4236-9c04-79faae290d0d name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.247538167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ce33753-e5a5-4236-9c04-79faae290d0d name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.248290360Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95d552f310077dbb71aa5b22ef7be71d04c3592f8270330f3dc6f8195d61e047,PodSandboxId:2c249bc4e15ab7b56cf51f9078096523a863e77b8bd9e308dd5311716682c55d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763769830827400746,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47d8e8ac-6e19-45c9-a495-6a6b6848a8e4,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26451e4d2e6ccdca80e0db009442a15061a408c0cba565df9e5041bf158049c3,PodSandboxId:93fa5fdbb6fc28f05008e5c01c96f3db881614c1406ddf48130fa76b31fc64be,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763769732046845788,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-cjd8h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4abfb066-72db-4ce1-8963-25d7bee932a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d861ebb3bc7a4b2eef3ae0f5920853815e45e2f5a7c904c15da9579bf1ed8354,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763769707973235782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name
\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937b2a1e9b101160e7748ede304ccb8d1525f30ab7403c6da8c9e069f1e199d0,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763769706921898308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},A
nnotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5168f65cd008045da49893b96bed7e32efb8346d66a7d9d40934ae8e2e9d0e,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763769706766851201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisi
oner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa18a4bc25e43ee3b48de6de00a157e952e7608951c944d746541a037c7d494,PodSandboxId:a8234273d863ee2cc98ae5cd8f82dc2da86843090e62e58decc21cdf13c668b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763769706814494400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622d332041e786701191f78a09bf73dd9406c9297fe6831bf955fa085b914c9d,PodSandboxId:df91e6302180a62a7b6581045d8002297595b188ee7a94262ba4105754685773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e5
61a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763769702074990416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7059d8a3af78a4a0c906c515245dc6d4414fc28d580cab2fddc21a5a3e5385,PodSandboxId:0f7dd710594cad5757444e9682b0fa16f3e34d7039a34fd992c885d80b8d25
8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763769701977969247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c435c405010db72805d64d746d7f
7105f7f12029df399df5284c3e900e2773,PodSandboxId:5bb0fd666e6bd9751e9fdfd576d9503c08d73449309add09c5734a2386ef52a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763769701785735881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c5ba38a97236a5f6237635c69b5123297b3dad277fc5793f3173862464829
d,PodSandboxId:8ba98d2cdc1e383d4c406e30f212b0ab0d901dada207b961499db4ed4d1e26df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763769701716886607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}
,&Container{Id:e3cfcf54044a53d7dd88d7db26f1e416d71caaab702bc46737a7be38cccdb604,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763769701689668484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:c0a6e5bbcecef81643a8b1f25289fb2a355f42d76ed00755be12d759e4376728,PodSandboxId:e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763769664906347403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cf4193002456e3aa12568d11c5337a
f3f25e972ef32773b772906e32177b19,PodSandboxId:6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763769660274653220,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35efea65afed5a14e33a117e8d2e86725efb1398a611e9c9ad4b66ff388490c,PodSandboxId:39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763769660277144916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5219aacb2ecc8f9ab6b7a4db8a50a890fedaa0848f7b8ec907cbcacfc3c6344,PodSandboxId:e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763769660245120828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2853482cd778dbcfedab3ca5c70c06d4b54b2ac79c80e85fa4d6427fb51d475f,PodSandboxId:a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763769643913271123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kuber
netes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ce33753-e5a5-4236-9c04-79faae290d0d name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.282326038Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1c4fd26-ba43-4817-aae6-1a81dcedf9ba name=/runtime.v1.RuntimeService/Version
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.282862996Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1c4fd26-ba43-4817-aae6-1a81dcedf9ba name=/runtime.v1.RuntimeService/Version
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.284883875Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4734e9ee-23be-487e-aa97-0b2e6e187944 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.285935315Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763770332285910694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203237,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4734e9ee-23be-487e-aa97-0b2e6e187944 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.286844513Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4ad682a-ed15-41cf-bcb8-4e89b0261f7c name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.287118732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4ad682a-ed15-41cf-bcb8-4e89b0261f7c name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.287624145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95d552f310077dbb71aa5b22ef7be71d04c3592f8270330f3dc6f8195d61e047,PodSandboxId:2c249bc4e15ab7b56cf51f9078096523a863e77b8bd9e308dd5311716682c55d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763769830827400746,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47d8e8ac-6e19-45c9-a495-6a6b6848a8e4,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26451e4d2e6ccdca80e0db009442a15061a408c0cba565df9e5041bf158049c3,PodSandboxId:93fa5fdbb6fc28f05008e5c01c96f3db881614c1406ddf48130fa76b31fc64be,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763769732046845788,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-cjd8h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4abfb066-72db-4ce1-8963-25d7bee932a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d861ebb3bc7a4b2eef3ae0f5920853815e45e2f5a7c904c15da9579bf1ed8354,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763769707973235782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name
\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937b2a1e9b101160e7748ede304ccb8d1525f30ab7403c6da8c9e069f1e199d0,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763769706921898308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},A
nnotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5168f65cd008045da49893b96bed7e32efb8346d66a7d9d40934ae8e2e9d0e,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763769706766851201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisi
oner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa18a4bc25e43ee3b48de6de00a157e952e7608951c944d746541a037c7d494,PodSandboxId:a8234273d863ee2cc98ae5cd8f82dc2da86843090e62e58decc21cdf13c668b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763769706814494400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622d332041e786701191f78a09bf73dd9406c9297fe6831bf955fa085b914c9d,PodSandboxId:df91e6302180a62a7b6581045d8002297595b188ee7a94262ba4105754685773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e5
61a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763769702074990416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7059d8a3af78a4a0c906c515245dc6d4414fc28d580cab2fddc21a5a3e5385,PodSandboxId:0f7dd710594cad5757444e9682b0fa16f3e34d7039a34fd992c885d80b8d25
8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763769701977969247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c435c405010db72805d64d746d7f
7105f7f12029df399df5284c3e900e2773,PodSandboxId:5bb0fd666e6bd9751e9fdfd576d9503c08d73449309add09c5734a2386ef52a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763769701785735881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c5ba38a97236a5f6237635c69b5123297b3dad277fc5793f3173862464829
d,PodSandboxId:8ba98d2cdc1e383d4c406e30f212b0ab0d901dada207b961499db4ed4d1e26df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763769701716886607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}
,&Container{Id:e3cfcf54044a53d7dd88d7db26f1e416d71caaab702bc46737a7be38cccdb604,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763769701689668484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:c0a6e5bbcecef81643a8b1f25289fb2a355f42d76ed00755be12d759e4376728,PodSandboxId:e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763769664906347403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cf4193002456e3aa12568d11c5337a
f3f25e972ef32773b772906e32177b19,PodSandboxId:6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763769660274653220,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35efea65afed5a14e33a117e8d2e86725efb1398a611e9c9ad4b66ff388490c,PodSandboxId:39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763769660277144916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5219aacb2ecc8f9ab6b7a4db8a50a890fedaa0848f7b8ec907cbcacfc3c6344,PodSandboxId:e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763769660245120828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2853482cd778dbcfedab3ca5c70c06d4b54b2ac79c80e85fa4d6427fb51d475f,PodSandboxId:a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763769643913271123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kuber
netes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4ad682a-ed15-41cf-bcb8-4e89b0261f7c name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.331803963Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a28f5314-de0e-4ed4-b74c-4caa54501d9b name=/runtime.v1.RuntimeService/Version
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.332257015Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a28f5314-de0e-4ed4-b74c-4caa54501d9b name=/runtime.v1.RuntimeService/Version
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.333840045Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7133552f-d0cf-4555-9b35-3bd5fd39a00c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.335449647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763770332335367457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203237,},InodesUsed:&UInt64Value{Value:105,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7133552f-d0cf-4555-9b35-3bd5fd39a00c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.337130149Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a30a9de-89dc-4b17-a1f0-0d2547212efc name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.337269758Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a30a9de-89dc-4b17-a1f0-0d2547212efc name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:12:12 functional-783762 crio[5816]: time="2025-11-22 00:12:12.337652000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:95d552f310077dbb71aa5b22ef7be71d04c3592f8270330f3dc6f8195d61e047,PodSandboxId:2c249bc4e15ab7b56cf51f9078096523a863e77b8bd9e308dd5311716682c55d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1763769830827400746,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47d8e8ac-6e19-45c9-a495-6a6b6848a8e4,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26451e4d2e6ccdca80e0db009442a15061a408c0cba565df9e5041bf158049c3,PodSandboxId:93fa5fdbb6fc28f05008e5c01c96f3db881614c1406ddf48130fa76b31fc64be,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1763769732046845788,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-cjd8h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4abfb066-72db-4ce1-8963-25d7bee932a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d861ebb3bc7a4b2eef3ae0f5920853815e45e2f5a7c904c15da9579bf1ed8354,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763769707973235782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name
\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:937b2a1e9b101160e7748ede304ccb8d1525f30ab7403c6da8c9e069f1e199d0,PodSandboxId:189d6f9812ba4d12f4fd6992624eeaa7ac80f8f3025d87e553934a3133f6a9a6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763769706921898308,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59710b70e8d30db93937d9019eab0a9a,},A
nnotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb5168f65cd008045da49893b96bed7e32efb8346d66a7d9d40934ae8e2e9d0e,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763769706766851201,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisi
oner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fa18a4bc25e43ee3b48de6de00a157e952e7608951c944d746541a037c7d494,PodSandboxId:a8234273d863ee2cc98ae5cd8f82dc2da86843090e62e58decc21cdf13c668b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763769706814494400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622d332041e786701191f78a09bf73dd9406c9297fe6831bf955fa085b914c9d,PodSandboxId:df91e6302180a62a7b6581045d8002297595b188ee7a94262ba4105754685773,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e5
61a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763769702074990416,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7059d8a3af78a4a0c906c515245dc6d4414fc28d580cab2fddc21a5a3e5385,PodSandboxId:0f7dd710594cad5757444e9682b0fa16f3e34d7039a34fd992c885d80b8d25
8e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763769701977969247,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c435c405010db72805d64d746d7f
7105f7f12029df399df5284c3e900e2773,PodSandboxId:5bb0fd666e6bd9751e9fdfd576d9503c08d73449309add09c5734a2386ef52a1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763769701785735881,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67c5ba38a97236a5f6237635c69b5123297b3dad277fc5793f3173862464829
d,PodSandboxId:8ba98d2cdc1e383d4c406e30f212b0ab0d901dada207b961499db4ed4d1e26df,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763769701716886607,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}
,&Container{Id:e3cfcf54044a53d7dd88d7db26f1e416d71caaab702bc46737a7be38cccdb604,PodSandboxId:5ba6b9ce4a22a38fa5e58daab9cadfb75bbb49799335ea6f577faa2ddad865e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1763769701689668484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00ac7e70-b6f1-4991-862a-db0ba26baf6c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:c0a6e5bbcecef81643a8b1f25289fb2a355f42d76ed00755be12d759e4376728,PodSandboxId:e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763769664906347403,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cqt7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3381fa89-cdde-43bf-a38c-f140281f28af,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cf4193002456e3aa12568d11c5337a
f3f25e972ef32773b772906e32177b19,PodSandboxId:6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763769660274653220,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6a6bb5d624dd9724d6436c47c57eec9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f35efea65afed5a14e33a117e8d2e86725efb1398a611e9c9ad4b66ff388490c,PodSandboxId:39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763769660277144916,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b393547de92689fa32a14bea69079b0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5219aacb2ecc8f9ab6b7a4db8a50a890fedaa0848f7b8ec907cbcacfc3c6344,PodSandboxId:e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763769660245120828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-783762,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5017c5ab7e5f7612c69e258534ddf5e4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\
"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2853482cd778dbcfedab3ca5c70c06d4b54b2ac79c80e85fa4d6427fb51d475f,PodSandboxId:a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763769643913271123,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4hlw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cb72e66-14a4-4209-861c-be0707b73762,},Annotations:map[string]string{io.kuber
netes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a30a9de-89dc-4b17-a1f0-0d2547212efc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	95d552f310077       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e     8 minutes ago       Exited              mount-munger              0                   2c249bc4e15ab       busybox-mount                               default
	26451e4d2e6cc       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   10 minutes ago      Running             echo-server               0                   93fa5fdbb6fc2       hello-node-connect-7d85dfc575-cjd8h         default
	d861ebb3bc7a4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                        10 minutes ago      Running             kube-apiserver            1                   189d6f9812ba4       kube-apiserver-functional-783762            kube-system
	937b2a1e9b101       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                        10 minutes ago      Exited              kube-apiserver            0                   189d6f9812ba4       kube-apiserver-functional-783762            kube-system
	3fa18a4bc25e4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        10 minutes ago      Running             coredns                   2                   a8234273d863e       coredns-66bc5c9577-4hlw7                    kube-system
	bb5168f65cd00       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        10 minutes ago      Running             storage-provisioner       5                   5ba6b9ce4a22a       storage-provisioner                         kube-system
	622d332041e78       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        10 minutes ago      Running             kube-controller-manager   3                   df91e6302180a       kube-controller-manager-functional-783762   kube-system
	ae7059d8a3af7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        10 minutes ago      Running             kube-scheduler            3                   0f7dd710594ca       kube-scheduler-functional-783762            kube-system
	c0c435c405010       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                        10 minutes ago      Running             kube-proxy                3                   5bb0fd666e6bd       kube-proxy-6cqt7                            kube-system
	67c5ba38a9723       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        10 minutes ago      Running             etcd                      3                   8ba98d2cdc1e3       etcd-functional-783762                      kube-system
	e3cfcf54044a5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        10 minutes ago      Exited              storage-provisioner       4                   5ba6b9ce4a22a       storage-provisioner                         kube-system
	c0a6e5bbcecef       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                        11 minutes ago      Exited              kube-proxy                2                   e4ec1069bc019       kube-proxy-6cqt7                            kube-system
	f35efea65afed       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                        11 minutes ago      Exited              kube-scheduler            2                   39e5a20696df9       kube-scheduler-functional-783762            kube-system
	04cf419300245       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                        11 minutes ago      Exited              kube-controller-manager   2                   6ff5789062c7b       kube-controller-manager-functional-783762   kube-system
	b5219aacb2ecc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        11 minutes ago      Exited              etcd                      2                   e9301a0a5805e       etcd-functional-783762                      kube-system
	2853482cd778d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        11 minutes ago      Exited              coredns                   1                   a513bd37a8530       coredns-66bc5c9577-4hlw7                    kube-system
	
	
	==> coredns [2853482cd778dbcfedab3ca5c70c06d4b54b2ac79c80e85fa4d6427fb51d475f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56778 - 24880 "HINFO IN 510444860572811029.660604583740510837. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.498605514s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3fa18a4bc25e43ee3b48de6de00a157e952e7608951c944d746541a037c7d494] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: watch of *v1.EndpointSlice ended with: very short watch: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] 127.0.0.1:46548 - 19382 "HINFO IN 3244271735892804347.7610011136115581116. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.855078147s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               functional-783762
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-783762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=functional-783762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_00_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:00:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-783762
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:12:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:08:55 +0000   Sat, 22 Nov 2025 00:00:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:08:55 +0000   Sat, 22 Nov 2025 00:00:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:08:55 +0000   Sat, 22 Nov 2025 00:00:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:08:55 +0000   Sat, 22 Nov 2025 00:00:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    functional-783762
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 d561c82d24c84e51aed0106657c2085c
	  System UUID:                d561c82d-24c8-4e51-aed0-106657c2085c
	  Boot ID:                    b0cf542f-e0b4-488f-b149-70c03b493ebb
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-2dc5f                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  default                     hello-node-connect-7d85dfc575-cjd8h           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-qhcz2                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 coredns-66bc5c9577-4hlw7                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-functional-783762                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-783762              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-783762     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-6cqt7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-783762              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-xc2xc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-284lt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-783762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-783762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-783762 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-783762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-783762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-783762 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node functional-783762 status is now: NodeReady
	  Normal  RegisteredNode           11m                node-controller  Node functional-783762 event: Registered Node functional-783762 in Controller
	  Normal  CIDRAssignmentFailed     11m                cidrAllocator    Node functional-783762 status is now: CIDRAssignmentFailed
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-783762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-783762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-783762 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-783762 event: Registered Node functional-783762 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node functional-783762 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node functional-783762 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node functional-783762 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-783762 event: Registered Node functional-783762 in Controller
	
	
	==> dmesg <==
	[  +0.000013] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.184426] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083639] kauditd_printk_skb: 1 callbacks suppressed
	[Nov22 00:00] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.145077] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.340851] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.297709] kauditd_printk_skb: 252 callbacks suppressed
	[  +0.113526] kauditd_printk_skb: 44 callbacks suppressed
	[  +4.759596] kauditd_printk_skb: 56 callbacks suppressed
	[  +4.379699] kauditd_printk_skb: 284 callbacks suppressed
	[Nov22 00:01] kauditd_printk_skb: 98 callbacks suppressed
	[  +7.640718] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.360006] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.113329] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.570529] kauditd_printk_skb: 78 callbacks suppressed
	[  +1.004551] kauditd_printk_skb: 277 callbacks suppressed
	[  +4.444666] kauditd_printk_skb: 70 callbacks suppressed
	[Nov22 00:02] kauditd_printk_skb: 133 callbacks suppressed
	[  +6.671222] kauditd_printk_skb: 53 callbacks suppressed
	[ +19.842391] kauditd_printk_skb: 110 callbacks suppressed
	[Nov22 00:03] kauditd_printk_skb: 25 callbacks suppressed
	[Nov22 00:05] kauditd_printk_skb: 74 callbacks suppressed
	[Nov22 00:08] crun[10000]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [67c5ba38a97236a5f6237635c69b5123297b3dad277fc5793f3173862464829d] <==
	{"level":"warn","ts":"2025-11-22T00:01:49.561695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.570903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.578124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.585459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.598496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.610962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.614359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.625644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.637505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.645274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.657386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.668836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.678186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.688728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.698178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.707480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.718525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.733117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.748915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.768596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.775168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:49.821712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42830","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:11:49.157830Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1100}
	{"level":"info","ts":"2025-11-22T00:11:49.183000Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1100,"took":"24.727214ms","hash":994266985,"current-db-size-bytes":3604480,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":1638400,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-11-22T00:11:49.183108Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":994266985,"revision":1100,"compact-revision":-1}
	
	
	==> etcd [b5219aacb2ecc8f9ab6b7a4db8a50a890fedaa0848f7b8ec907cbcacfc3c6344] <==
	{"level":"warn","ts":"2025-11-22T00:01:02.790222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:02.806118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:02.847308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:02.872404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:02.902639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:02.922989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:01:03.026408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51086","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:01:27.413794Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-22T00:01:27.413883Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-783762","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.76:2380"],"advertise-client-urls":["https://192.168.39.76:2379"]}
	{"level":"error","ts":"2025-11-22T00:01:27.420370Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-22T00:01:27.493936Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"info","ts":"2025-11-22T00:01:27.494086Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4f06aa0eaa8889d9","current-leader-member-id":"4f06aa0eaa8889d9"}
	{"level":"info","ts":"2025-11-22T00:01:27.494207Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-22T00:01:27.494218Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-11-22T00:01:27.493995Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-22T00:01:27.494425Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-22T00:01:27.494488Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-22T00:01:27.494495Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-22T00:01:27.494528Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.76:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-22T00:01:27.494535Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.76:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-22T00:01:27.494540Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.76:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-22T00:01:27.497717Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.76:2380"}
	{"level":"error","ts":"2025-11-22T00:01:27.497803Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.76:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-22T00:01:27.497831Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.76:2380"}
	{"level":"info","ts":"2025-11-22T00:01:27.497836Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-783762","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.76:2380"],"advertise-client-urls":["https://192.168.39.76:2379"]}
	
	
	==> kernel <==
	 00:12:12 up 12 min,  0 users,  load average: 0.48, 0.46, 0.33
	Linux functional-783762 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [937b2a1e9b101160e7748ede304ccb8d1525f30ab7403c6da8c9e069f1e199d0] <==
	I1122 00:01:47.323261       1 options.go:263] external host was not specified, using 192.168.39.76
	I1122 00:01:47.342461       1 server.go:150] Version: v1.34.1
	I1122 00:01:47.344089       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1122 00:01:47.350369       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-apiserver [d861ebb3bc7a4b2eef3ae0f5920853815e45e2f5a7c904c15da9579bf1ed8354] <==
	I1122 00:01:50.584164       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1122 00:01:50.584812       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:01:50.613117       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:01:50.617194       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:01:50.617253       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:01:50.633429       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:01:50.633455       1 policy_source.go:240] refreshing policies
	I1122 00:01:50.668250       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:01:51.389775       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:01:52.299244       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:01:52.312669       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:01:52.363823       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:01:52.396406       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:01:52.411780       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:01:54.192121       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:01:54.242406       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:01:54.289696       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:02:05.981512       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.230.73"}
	I1122 00:02:10.365289       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.229.206"}
	I1122 00:02:11.008541       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.112.146"}
	I1122 00:02:18.200764       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.69.213"}
	I1122 00:03:57.914693       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:03:58.256928       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.197.188"}
	I1122 00:03:58.288243       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.112.221"}
	I1122 00:11:50.518403       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [04cf4193002456e3aa12568d11c5337af3f25e972ef32773b772906e32177b19] <==
	I1122 00:01:07.169996       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:01:07.170099       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:01:07.171872       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:01:07.172227       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:01:07.173981       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:01:07.174640       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:01:07.180084       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:01:07.180466       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1122 00:01:07.184965       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:01:07.184982       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:01:07.185568       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:01:07.188346       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:01:07.191724       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:01:07.191803       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:01:07.191883       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-783762"
	I1122 00:01:07.191943       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:01:07.200160       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:01:07.206539       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:01:07.208792       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:01:07.212867       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:01:07.212941       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:01:07.212952       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:01:07.212957       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:01:07.215230       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:01:07.216946       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [622d332041e786701191f78a09bf73dd9406c9297fe6831bf955fa085b914c9d] <==
	I1122 00:01:53.915146       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:01:53.926745       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:01:53.927772       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:01:53.930209       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1122 00:01:53.935999       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:01:53.936102       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1122 00:01:53.936128       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:01:53.936196       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1122 00:01:53.937461       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:01:53.937508       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:01:53.937509       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:01:53.938858       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:01:53.940109       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:01:53.940123       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:01:53.946617       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:01:53.946660       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:01:53.946667       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1122 00:03:58.049870       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.062735       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.063775       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.079287       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.079327       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.090808       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.091966       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1122 00:03:58.101149       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [c0a6e5bbcecef81643a8b1f25289fb2a355f42d76ed00755be12d759e4376728] <==
	I1122 00:01:05.199636       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:01:05.299976       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:01:05.300123       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.76"]
	E1122 00:01:05.300255       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:01:05.355005       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1122 00:01:05.355111       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1122 00:01:05.355133       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:01:05.368603       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:01:05.369525       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:01:05.370098       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:01:05.378679       1 config.go:200] "Starting service config controller"
	I1122 00:01:05.379181       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:01:05.379281       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:01:05.379359       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:01:05.379373       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:01:05.379377       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:01:05.381930       1 config.go:309] "Starting node config controller"
	I1122 00:01:05.382118       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:01:05.382145       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:01:05.479858       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:01:05.479996       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:01:05.479915       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [c0c435c405010db72805d64d746d7f7105f7f12029df399df5284c3e900e2773] <==
	E1122 00:01:50.503748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-783762\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1122 00:01:56.849670       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:01:56.849736       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.76"]
	E1122 00:01:56.849823       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:01:56.891422       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1122 00:01:56.891506       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1122 00:01:56.891532       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:01:56.902453       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:01:56.902815       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:01:56.902851       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:01:56.907904       1 config.go:200] "Starting service config controller"
	I1122 00:01:56.908112       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:01:56.908180       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:01:56.908187       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:01:56.908199       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:01:56.908204       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:01:56.912411       1 config.go:309] "Starting node config controller"
	I1122 00:01:56.912542       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:01:56.912551       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:01:57.008980       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:01:57.009061       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:01:57.009135       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ae7059d8a3af78a4a0c906c515245dc6d4414fc28d580cab2fddc21a5a3e5385] <==
	I1122 00:01:46.029707       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:01:46.029751       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:01:46.032102       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:46.032145       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:46.032850       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:01:46.032926       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:01:46.132551       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1122 00:01:50.422801       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:01:50.422864       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:01:50.422881       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:01:50.422897       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:01:50.422911       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:01:50.422922       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:01:50.422930       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:01:50.422942       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:01:50.422956       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:01:50.423168       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:01:50.423205       1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:01:50.423216       1 reflector.go:205] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:01:50.423229       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:01:50.423244       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:01:50.433368       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:01:50.433490       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:01:50.433535       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:01:50.433558       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	
	
	==> kube-scheduler [f35efea65afed5a14e33a117e8d2e86725efb1398a611e9c9ad4b66ff388490c] <==
	I1122 00:01:01.732896       1 serving.go:386] Generated self-signed cert in-memory
	W1122 00:01:03.754650       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 00:01:03.754673       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:01:03.754681       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 00:01:03.754686       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 00:01:03.832520       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:01:03.834121       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:01:03.840611       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:01:03.840695       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:03.841229       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:03.840710       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:01:03.949210       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:27.426479       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1122 00:01:27.426592       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1122 00:01:27.426611       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1122 00:01:27.426632       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:01:27.426820       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1122 00:01:27.426855       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 22 00:11:34 functional-783762 kubelet[6792]: E1122 00:11:34.470185    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-xc2xc" podUID="8328e596-dc9b-4609-b515-a53fe2575073"
	Nov 22 00:11:36 functional-783762 kubelet[6792]: E1122 00:11:36.058356    6792 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763770296056576570  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203237}  inodes_used:{value:105}}"
	Nov 22 00:11:36 functional-783762 kubelet[6792]: E1122 00:11:36.058852    6792 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763770296056576570  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203237}  inodes_used:{value:105}}"
	Nov 22 00:11:43 functional-783762 kubelet[6792]: E1122 00:11:43.544777    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-qhcz2" podUID="bad1e5d7-de2e-4b08-b406-2e87deac3a9c"
	Nov 22 00:11:45 functional-783762 kubelet[6792]: E1122 00:11:45.842525    6792 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod3381fa89-cdde-43bf-a38c-f140281f28af/crio-e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511: Error finding container e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511: Status 404 returned error can't find the container with id e4ec1069bc01988ab7cbadc862ca142d62f6817017f947a57a9667444a97f511
	Nov 22 00:11:45 functional-783762 kubelet[6792]: E1122 00:11:45.843495    6792 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod9b393547de92689fa32a14bea69079b0/crio-39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf: Error finding container 39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf: Status 404 returned error can't find the container with id 39e5a20696df9463955e23ed29895cc87e900159279313bc5090d724c85ccfdf
	Nov 22 00:11:45 functional-783762 kubelet[6792]: E1122 00:11:45.843910    6792 manager.go:1116] Failed to create existing container: /kubepods/burstable/podd6a6bb5d624dd9724d6436c47c57eec9/crio-6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48: Error finding container 6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48: Status 404 returned error can't find the container with id 6ff5789062c7b88d0e5c9915aa01dc4f6932528a6e55ad5e178dfd4d537a1b48
	Nov 22 00:11:45 functional-783762 kubelet[6792]: E1122 00:11:45.844190    6792 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod5017c5ab7e5f7612c69e258534ddf5e4/crio-e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d: Error finding container e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d: Status 404 returned error can't find the container with id e9301a0a5805e97cdf0f30e8f35504b79f69bab035854b5cb6172b3633607a5d
	Nov 22 00:11:45 functional-783762 kubelet[6792]: E1122 00:11:45.844598    6792 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod5cb72e66-14a4-4209-861c-be0707b73762/crio-a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972: Error finding container a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972: Status 404 returned error can't find the container with id a513bd37a85301e4f5b8ed3fdfc3c5d95ac0b6eb70bae7f406c98f01b2570972
	Nov 22 00:11:46 functional-783762 kubelet[6792]: E1122 00:11:46.061595    6792 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763770306061113979  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203237}  inodes_used:{value:105}}"
	Nov 22 00:11:46 functional-783762 kubelet[6792]: E1122 00:11:46.061641    6792 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763770306061113979  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203237}  inodes_used:{value:105}}"
	Nov 22 00:11:46 functional-783762 kubelet[6792]: E1122 00:11:46.543073    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-2dc5f" podUID="db6d5163-56fa-41b6-9f55-3ebb021aa3c3"
	Nov 22 00:11:48 functional-783762 kubelet[6792]: E1122 00:11:48.544082    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-xc2xc" podUID="8328e596-dc9b-4609-b515-a53fe2575073"
	Nov 22 00:11:56 functional-783762 kubelet[6792]: E1122 00:11:56.063361    6792 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763770316062790467  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203237}  inodes_used:{value:105}}"
	Nov 22 00:11:56 functional-783762 kubelet[6792]: E1122 00:11:56.063458    6792 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763770316062790467  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203237}  inodes_used:{value:105}}"
	Nov 22 00:11:57 functional-783762 kubelet[6792]: E1122 00:11:57.542815    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-2dc5f" podUID="db6d5163-56fa-41b6-9f55-3ebb021aa3c3"
	Nov 22 00:12:00 functional-783762 kubelet[6792]: E1122 00:12:00.546870    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-xc2xc" podUID="8328e596-dc9b-4609-b515-a53fe2575073"
	Nov 22 00:12:04 functional-783762 kubelet[6792]: E1122 00:12:04.569796    6792 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 22 00:12:04 functional-783762 kubelet[6792]: E1122 00:12:04.569871    6792 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 22 00:12:04 functional-783762 kubelet[6792]: E1122 00:12:04.570200    6792 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-284lt_kubernetes-dashboard(5a35c730-8ed4-4877-b4b8-bfb41a41da80): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 22 00:12:04 functional-783762 kubelet[6792]: E1122 00:12:04.570263    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-284lt" podUID="5a35c730-8ed4-4877-b4b8-bfb41a41da80"
	Nov 22 00:12:06 functional-783762 kubelet[6792]: E1122 00:12:06.066522    6792 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763770326065962353  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203237}  inodes_used:{value:105}}"
	Nov 22 00:12:06 functional-783762 kubelet[6792]: E1122 00:12:06.066636    6792 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763770326065962353  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:203237}  inodes_used:{value:105}}"
	Nov 22 00:12:11 functional-783762 kubelet[6792]: E1122 00:12:11.543372    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-2dc5f" podUID="db6d5163-56fa-41b6-9f55-3ebb021aa3c3"
	Nov 22 00:12:12 functional-783762 kubelet[6792]: E1122 00:12:12.544924    6792 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-xc2xc" podUID="8328e596-dc9b-4609-b515-a53fe2575073"
	
	
	==> storage-provisioner [bb5168f65cd008045da49893b96bed7e32efb8346d66a7d9d40934ae8e2e9d0e] <==
	W1122 00:11:48.589954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:11:50.593878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:11:50.603758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:11:52.607851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:11:52.616811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:11:54.620821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:11:54.628872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:11:56.633881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:11:56.640001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:11:58.643706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:11:58.653988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:12:00.657738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:12:00.664081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:12:02.668326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:12:02.677172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:12:04.680463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:12:04.689779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:12:06.694630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:12:06.700633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:12:08.704990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:12:08.710700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:12:10.714447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:12:10.720469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:12:12.724637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:12:12.734105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e3cfcf54044a53d7dd88d7db26f1e416d71caaab702bc46737a7be38cccdb604] <==
	I1122 00:01:42.079950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:01:42.087124       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-783762 -n functional-783762
helpers_test.go:269: (dbg) Run:  kubectl --context functional-783762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-2dc5f mysql-5bb876957f-qhcz2 sp-pod dashboard-metrics-scraper-77bf4d6c4c-xc2xc kubernetes-dashboard-855c9754f9-284lt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-783762 describe pod busybox-mount hello-node-75c85bcc94-2dc5f mysql-5bb876957f-qhcz2 sp-pod dashboard-metrics-scraper-77bf4d6c4c-xc2xc kubernetes-dashboard-855c9754f9-284lt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-783762 describe pod busybox-mount hello-node-75c85bcc94-2dc5f mysql-5bb876957f-qhcz2 sp-pod dashboard-metrics-scraper-77bf4d6c4c-xc2xc kubernetes-dashboard-855c9754f9-284lt: exit status 1 (106.368304ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-783762/192.168.39.76
	Start Time:       Sat, 22 Nov 2025 00:02:22 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://95d552f310077dbb71aa5b22ef7be71d04c3592f8270330f3dc6f8195d61e047
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 22 Nov 2025 00:03:50 +0000
	      Finished:     Sat, 22 Nov 2025 00:03:50 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lhmwv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-lhmwv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m51s  default-scheduler  Successfully assigned default/busybox-mount to functional-783762
	  Normal  Pulling    9m51s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     8m23s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.335s (1m28.082s including waiting). Image size: 4631262 bytes.
	  Normal  Created    8m23s  kubelet            Created container: mount-munger
	  Normal  Started    8m23s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-2dc5f
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-783762/192.168.39.76
	Start Time:       Sat, 22 Nov 2025 00:02:18 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svcql (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-svcql:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m55s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-2dc5f to functional-783762
	  Warning  Failed     4m16s (x3 over 9m1s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m23s (x4 over 9m55s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     69s (x4 over 9m1s)     kubelet            Error: ErrImagePull
	  Warning  Failed     69s                    kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2s (x10 over 9m1s)     kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2s (x10 over 9m1s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-qhcz2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-783762/192.168.39.76
	Start Time:       Sat, 22 Nov 2025 00:02:11 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dxdkj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dxdkj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-qhcz2 to functional-783762
	  Warning  Failed     4m46s                 kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     105s (x3 over 9m31s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     105s (x4 over 9m31s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    30s (x11 over 9m31s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     30s (x11 over 9m31s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    18s (x5 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-783762/192.168.39.76
	Start Time:       Sat, 22 Nov 2025 00:02:20 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zv6fz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-zv6fz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m53s                  default-scheduler  Successfully assigned default/sp-pod to functional-783762
	  Warning  Failed     8m25s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m15s (x3 over 8m25s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m15s (x2 over 5m22s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    98s (x5 over 8m25s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     98s (x5 over 8m25s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    86s (x4 over 9m52s)    kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-xc2xc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-284lt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-783762 describe pod busybox-mount hello-node-75c85bcc94-2dc5f mysql-5bb876957f-qhcz2 sp-pod dashboard-metrics-scraper-77bf4d6c4c-xc2xc kubernetes-dashboard-855c9754f9-284lt: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.93s)
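
Every pending pod in the post-mortem above is stuck on the same root cause: unauthenticated pulls from docker.io (mysql:5.7, nginx, kicbase/echo-server, and the dashboard images) are rejected with "toomanyrequests" by Docker Hub's rate limit, so the containers never start and the waits expire. When reproducing locally, one way to take Docker Hub out of the loop is to side-load the images into the profile's image store before the tests run; the sketch below assumes the standard docker and minikube CLIs and reuses the profile name from the logs:

	# Pull once on the host (log in to Docker Hub first so the pulls are authenticated),
	# then copy each image into the minikube node's CRI-O image store.
	docker pull kicbase/echo-server:latest
	docker pull mysql:5.7
	docker pull nginx:latest
	minikube -p functional-783762 image load kicbase/echo-server:latest
	minikube -p functional-783762 image load mysql:5.7
	minikube -p functional-783762 image load nginx:latest

	# Confirm the images are visible inside the cluster before re-running the tests.
	minikube -p functional-783762 image ls | grep -E 'echo-server|mysql|nginx'

Alternatively, starting the profile with a registry mirror (minikube start --registry-mirror=...) keeps the kubelet away from docker.io entirely.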

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-783762 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-783762 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-2dc5f" [db6d5163-56fa-41b6-9f55-3ebb021aa3c3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-783762 -n functional-783762
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-22 00:12:18.465792147 +0000 UTC m=+1543.330083839
functional_test.go:1460: (dbg) Run:  kubectl --context functional-783762 describe po hello-node-75c85bcc94-2dc5f -n default
functional_test.go:1460: (dbg) kubectl --context functional-783762 describe po hello-node-75c85bcc94-2dc5f -n default:
Name:             hello-node-75c85bcc94-2dc5f
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-783762/192.168.39.76
Start Time:       Sat, 22 Nov 2025 00:02:18 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svcql (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-svcql:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-2dc5f to functional-783762
Warning  Failed     4m21s (x3 over 9m6s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m28s (x4 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     74s (x4 over 9m6s)    kubelet            Error: ErrImagePull
Warning  Failed     74s                   kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    7s (x10 over 9m6s)    kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     7s (x10 over 9m6s)    kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-783762 logs hello-node-75c85bcc94-2dc5f -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-783762 logs hello-node-75c85bcc94-2dc5f -n default: exit status 1 (76.490774ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-2dc5f" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-783762 logs hello-node-75c85bcc94-2dc5f -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.59s)
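
The DeployApp failure is the same rate-limit problem seen from the harness side: functional_test.go:1460 polls for up to 10m0s for a pod labelled app=hello-node to become ready, and with the echo-server image stuck in ImagePullBackOff that deadline can only be exceeded. A rough hand-run equivalent of the wait and the follow-up diagnosis, reusing the context name from the logs (a sketch, not the harness code), is:

	# Roughly the condition and ceiling the test waits for.
	kubectl --context functional-783762 wait pod -l app=hello-node --for=condition=Ready --timeout=10m

	# When the wait times out, the pull errors surface as pod events.
	kubectl --context functional-783762 describe pod -l app=hello-node
	kubectl --context functional-783762 get events --field-selector reason=Failed --sort-by=.lastTimestamp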

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-783762 service --namespace=default --https --url hello-node: exit status 115 (270.58145ms)

                                                
                                                
-- stdout --
	https://192.168.39.76:31922
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-783762 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)
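
Here `minikube service` did resolve a NodePort URL (https://192.168.39.76:31922) but exited 115 because the hello-node service has no running pod behind it, which is what SVC_UNREACHABLE reports. A quick way to confirm that from the same context (names taken from the logs above) is to check whether the service has any ready endpoints:

	# An empty ENDPOINTS column means no Ready pod backs the service,
	# so the NodePort URL above would never answer.
	kubectl --context functional-783762 get service hello-node
	kubectl --context functional-783762 get endpoints hello-node
	kubectl --context functional-783762 get pods -l app=hello-node -o wide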

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-783762 service hello-node --url --format={{.IP}}: exit status 115 (267.563486ms)

                                                
                                                
-- stdout --
	192.168.39.76
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-783762 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.27s)
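
The --format flag is a Go template rendered for each resolved service URL; the default template is "http://{{.IP}}:{{.Port}}", so passing only "{{.IP}}" prints just the node IP (192.168.39.76 here), which is what the stdout shows before the same no-running-pod check fails the command. For example (quoting the template so the shell leaves the braces alone):

	# Default template: full URL per service port.
	minikube -p functional-783762 service hello-node --url --format="http://{{.IP}}:{{.Port}}"
	# IP only, as the test invokes it.
	minikube -p functional-783762 service hello-node --url --format="{{.IP}}"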

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-783762 service hello-node --url: exit status 115 (283.061373ms)

                                                
                                                
-- stdout --
	http://192.168.39.76:31922
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-783762 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.76:31922
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                    
x
+
TestPreload (161.62s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-193194 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-193194 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m40.319303154s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-193194 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-193194 image pull gcr.io/k8s-minikube/busybox: (2.636562484s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-193194
E1122 00:53:58.476258  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-193194: (6.94188866s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-193194 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-193194 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (48.914291705s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-193194 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:615: *** TestPreload FAILED at 2025-11-22 00:54:50.371402814 +0000 UTC m=+4095.235694506
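
The sequence TestPreload drives is visible in the steps above: create the cluster with --preload=false, pull gcr.io/k8s-minikube/busybox into its image store, stop it, restart it with the preload tarball enabled, and assert that the previously pulled image is still listed. The image list after the restart contains only the preloaded system images, so the side-loaded busybox did not survive the restart. A hand-run version of the same sequence, reusing the commands and profile name from the log (a sketch, not the test source), is:

	minikube start -p test-preload-193194 --memory=3072 --preload=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.32.0
	minikube -p test-preload-193194 image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload-193194
	minikube start -p test-preload-193194 --memory=3072 --driver=kvm2 --container-runtime=crio

	# The assertion that failed: busybox should still appear after the restart.
	minikube -p test-preload-193194 image list | grep k8s-minikube/busybox
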
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-193194 -n test-preload-193194
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-193194 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-193194 logs -n 25: (1.03540989s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-267585 ssh -n multinode-267585-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:41 UTC │
	│ ssh     │ multinode-267585 ssh -n multinode-267585 sudo cat /home/docker/cp-test_multinode-267585-m03_multinode-267585.txt                                          │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:41 UTC │
	│ cp      │ multinode-267585 cp multinode-267585-m03:/home/docker/cp-test.txt multinode-267585-m02:/home/docker/cp-test_multinode-267585-m03_multinode-267585-m02.txt │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:41 UTC │
	│ ssh     │ multinode-267585 ssh -n multinode-267585-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:41 UTC │
	│ ssh     │ multinode-267585 ssh -n multinode-267585-m02 sudo cat /home/docker/cp-test_multinode-267585-m03_multinode-267585-m02.txt                                  │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:41 UTC │
	│ node    │ multinode-267585 node stop m03                                                                                                                            │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:41 UTC │
	│ node    │ multinode-267585 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:42 UTC │
	│ node    │ list -p multinode-267585                                                                                                                                  │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:42 UTC │                     │
	│ stop    │ -p multinode-267585                                                                                                                                       │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:42 UTC │ 22 Nov 25 00:44 UTC │
	│ start   │ -p multinode-267585 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:44 UTC │ 22 Nov 25 00:47 UTC │
	│ node    │ list -p multinode-267585                                                                                                                                  │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:47 UTC │                     │
	│ node    │ multinode-267585 node delete m03                                                                                                                          │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:47 UTC │ 22 Nov 25 00:47 UTC │
	│ stop    │ multinode-267585 stop                                                                                                                                     │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:47 UTC │ 22 Nov 25 00:49 UTC │
	│ start   │ -p multinode-267585 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:49 UTC │ 22 Nov 25 00:51 UTC │
	│ node    │ list -p multinode-267585                                                                                                                                  │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │                     │
	│ start   │ -p multinode-267585-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-267585-m02 │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │                     │
	│ start   │ -p multinode-267585-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-267585-m03 │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │ 22 Nov 25 00:52 UTC │
	│ node    │ add -p multinode-267585                                                                                                                                   │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │                     │
	│ delete  │ -p multinode-267585-m03                                                                                                                                   │ multinode-267585-m03 │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ delete  │ -p multinode-267585                                                                                                                                       │ multinode-267585     │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ start   │ -p test-preload-193194 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-193194  │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:53 UTC │
	│ image   │ test-preload-193194 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-193194  │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:53 UTC │
	│ stop    │ -p test-preload-193194                                                                                                                                    │ test-preload-193194  │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:54 UTC │
	│ start   │ -p test-preload-193194 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-193194  │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ image   │ test-preload-193194 image list                                                                                                                            │ test-preload-193194  │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:54:01
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:54:01.315480  279486 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:54:01.315625  279486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:54:01.315635  279486 out.go:374] Setting ErrFile to fd 2...
	I1122 00:54:01.315640  279486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:54:01.315886  279486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 00:54:01.316386  279486 out.go:368] Setting JSON to false
	I1122 00:54:01.317275  279486 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":30969,"bootTime":1763741872,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:54:01.317342  279486 start.go:143] virtualization: kvm guest
	I1122 00:54:01.319753  279486 out.go:179] * [test-preload-193194] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:54:01.321236  279486 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:54:01.321249  279486 notify.go:221] Checking for updates...
	I1122 00:54:01.323883  279486 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:54:01.325212  279486 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1122 00:54:01.326485  279486 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	I1122 00:54:01.327851  279486 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:54:01.329135  279486 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:54:01.330977  279486 config.go:182] Loaded profile config "test-preload-193194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1122 00:54:01.332792  279486 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1122 00:54:01.334090  279486 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:54:01.368491  279486 out.go:179] * Using the kvm2 driver based on existing profile
	I1122 00:54:01.369903  279486 start.go:309] selected driver: kvm2
	I1122 00:54:01.369923  279486 start.go:930] validating driver "kvm2" against &{Name:test-preload-193194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-193194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:54:01.370033  279486 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:54:01.370937  279486 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:54:01.370972  279486 cni.go:84] Creating CNI manager for ""
	I1122 00:54:01.371028  279486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1122 00:54:01.371080  279486 start.go:353] cluster config:
	{Name:test-preload-193194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-193194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:54:01.371174  279486 iso.go:125] acquiring lock: {Name:mkc83d3435f1eaa5a92358fc78f85b7d74048deb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:54:01.372840  279486 out.go:179] * Starting "test-preload-193194" primary control-plane node in "test-preload-193194" cluster
	I1122 00:54:01.374280  279486 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1122 00:54:01.393805  279486 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1122 00:54:01.393844  279486 cache.go:65] Caching tarball of preloaded images
	I1122 00:54:01.394027  279486 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1122 00:54:01.395874  279486 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1122 00:54:01.397122  279486 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1122 00:54:01.420810  279486 preload.go:295] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1122 00:54:01.420881  279486 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1122 00:54:04.007199  279486 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1122 00:54:04.007342  279486 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/config.json ...
	I1122 00:54:04.007591  279486 start.go:360] acquireMachinesLock for test-preload-193194: {Name:mk0193f6f5636a08cc7939c91649c5870e7698fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1122 00:54:04.007664  279486 start.go:364] duration metric: took 47.713µs to acquireMachinesLock for "test-preload-193194"
	I1122 00:54:04.007700  279486 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:54:04.007708  279486 fix.go:54] fixHost starting: 
	I1122 00:54:04.009780  279486 fix.go:112] recreateIfNeeded on test-preload-193194: state=Stopped err=<nil>
	W1122 00:54:04.009812  279486 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:54:04.011858  279486 out.go:252] * Restarting existing kvm2 VM for "test-preload-193194" ...
	I1122 00:54:04.011935  279486 main.go:143] libmachine: starting domain...
	I1122 00:54:04.011950  279486 main.go:143] libmachine: ensuring networks are active...
	I1122 00:54:04.012799  279486 main.go:143] libmachine: Ensuring network default is active
	I1122 00:54:04.013260  279486 main.go:143] libmachine: Ensuring network mk-test-preload-193194 is active
	I1122 00:54:04.013733  279486 main.go:143] libmachine: getting domain XML...
	I1122 00:54:04.014937  279486 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-193194</name>
	  <uuid>dc2a322d-494b-44fa-b467-0012e3756d23</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/test-preload-193194/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/test-preload-193194/test-preload-193194.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:45:9e:73'/>
	      <source network='mk-test-preload-193194'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:e7:19:19'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
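
The XML above is the libvirt domain definition that the kvm2 driver boots. A rough sketch of the same "look up the stopped domain, dump its XML, start it" sequence using the libvirt.org/go/libvirt bindings (an assumption for illustration; minikube's driver wraps libvirt through its own machine library):

    // start_domain.go - start an existing, stopped libvirt domain.
    package main

    import (
        "fmt"
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer conn.Close()

        dom, err := conn.LookupDomainByName("test-preload-193194")
        if err != nil {
            log.Fatalf("lookup domain: %v", err)
        }
        defer dom.Free()

        // Dump the domain XML, much like the "starting domain XML:" block above.
        xml, err := dom.GetXMLDesc(0)
        if err != nil {
            log.Fatalf("xml: %v", err)
        }
        fmt.Println(xml)

        if active, _ := dom.IsActive(); !active {
            if err := dom.Create(); err != nil { // Create() boots a defined-but-stopped domain
                log.Fatalf("start: %v", err)
            }
        }
        fmt.Println("domain is now running")
    }
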
	
	I1122 00:54:05.291470  279486 main.go:143] libmachine: waiting for domain to start...
	I1122 00:54:05.292941  279486 main.go:143] libmachine: domain is now running
	I1122 00:54:05.292961  279486 main.go:143] libmachine: waiting for IP...
	I1122 00:54:05.293727  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:05.294401  279486 main.go:143] libmachine: domain test-preload-193194 has current primary IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:05.294417  279486 main.go:143] libmachine: found domain IP: 192.168.39.70
	I1122 00:54:05.294432  279486 main.go:143] libmachine: reserving static IP address...
	I1122 00:54:05.294792  279486 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-193194", mac: "52:54:00:45:9e:73", ip: "192.168.39.70"} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:52:27 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:05.294817  279486 main.go:143] libmachine: skip adding static IP to network mk-test-preload-193194 - found existing host DHCP lease matching {name: "test-preload-193194", mac: "52:54:00:45:9e:73", ip: "192.168.39.70"}
	I1122 00:54:05.294825  279486 main.go:143] libmachine: reserved static IP address 192.168.39.70 for domain test-preload-193194
	I1122 00:54:05.294833  279486 main.go:143] libmachine: waiting for SSH...
	I1122 00:54:05.294838  279486 main.go:143] libmachine: Getting to WaitForSSH function...
	I1122 00:54:05.297212  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:05.297578  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:52:27 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:05.297600  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:05.297778  279486 main.go:143] libmachine: Using SSH client type: native
	I1122 00:54:05.298013  279486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1122 00:54:05.298028  279486 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1122 00:54:08.407947  279486 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1122 00:54:14.488038  279486 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: no route to host
	I1122 00:54:17.489062  279486 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.70:22: connect: connection refused
	I1122 00:54:20.600597  279486 main.go:143] libmachine: SSH cmd err, output: <nil>: 
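
The "no route to host" and "connection refused" dials above are the normal wait-for-SSH loop: the driver keeps retrying a trivial exit 0 until the guest answers. A minimal sketch of such a loop with golang.org/x/crypto/ssh (address and key path copied from this run; the retry policy is illustrative, not minikube's):

    // wait_for_ssh.go - retry a trivial SSH command until the guest is reachable.
    package main

    import (
        "fmt"
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21934-244751/.minikube/machines/test-preload-193194/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
            Timeout:         5 * time.Second,
        }

        for attempt := 1; ; attempt++ {
            client, err := ssh.Dial("tcp", "192.168.39.70:22", cfg)
            if err != nil {
                log.Printf("attempt %d: %v", attempt, err) // e.g. "no route to host"
                time.Sleep(3 * time.Second)
                continue
            }
            sess, err := client.NewSession()
            if err == nil {
                err = sess.Run("exit 0")
                sess.Close()
            }
            client.Close()
            if err == nil {
                fmt.Println("SSH is up")
                return
            }
            time.Sleep(3 * time.Second)
        }
    }
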
	I1122 00:54:20.604691  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:20.605128  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:54:16 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:20.605154  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:20.605345  279486 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/config.json ...
	I1122 00:54:20.605555  279486 machine.go:94] provisionDockerMachine start ...
	I1122 00:54:20.607741  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:20.608116  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:54:16 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:20.608157  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:20.608308  279486 main.go:143] libmachine: Using SSH client type: native
	I1122 00:54:20.608505  279486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1122 00:54:20.608515  279486 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:54:20.715805  279486 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1122 00:54:20.715841  279486 buildroot.go:166] provisioning hostname "test-preload-193194"
	I1122 00:54:20.718979  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:20.719385  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:54:16 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:20.719407  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:20.719598  279486 main.go:143] libmachine: Using SSH client type: native
	I1122 00:54:20.719812  279486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1122 00:54:20.719823  279486 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-193194 && echo "test-preload-193194" | sudo tee /etc/hostname
	I1122 00:54:20.846884  279486 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-193194
	
	I1122 00:54:20.850082  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:20.850521  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:54:16 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:20.850550  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:20.850761  279486 main.go:143] libmachine: Using SSH client type: native
	I1122 00:54:20.851046  279486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1122 00:54:20.851069  279486 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-193194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-193194/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-193194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:54:20.972948  279486 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:54:20.973001  279486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21934-244751/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-244751/.minikube}
	I1122 00:54:20.973031  279486 buildroot.go:174] setting up certificates
	I1122 00:54:20.973043  279486 provision.go:84] configureAuth start
	I1122 00:54:20.976451  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:20.976922  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:54:16 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:20.976965  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:20.979365  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:20.979722  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:54:16 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:20.979743  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:20.979853  279486 provision.go:143] copyHostCerts
	I1122 00:54:20.979917  279486 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-244751/.minikube/ca.pem, removing ...
	I1122 00:54:20.979945  279486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-244751/.minikube/ca.pem
	I1122 00:54:20.980042  279486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/ca.pem (1078 bytes)
	I1122 00:54:20.980175  279486 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-244751/.minikube/cert.pem, removing ...
	I1122 00:54:20.980187  279486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-244751/.minikube/cert.pem
	I1122 00:54:20.980219  279486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/cert.pem (1123 bytes)
	I1122 00:54:20.980294  279486 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-244751/.minikube/key.pem, removing ...
	I1122 00:54:20.980302  279486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-244751/.minikube/key.pem
	I1122 00:54:20.980328  279486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/key.pem (1679 bytes)
	I1122 00:54:20.980391  279486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem org=jenkins.test-preload-193194 san=[127.0.0.1 192.168.39.70 localhost minikube test-preload-193194]
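
The server cert above is issued from the local CA with SANs covering every name and IP the machine may be reached by. A simplified Go sketch of building such a SAN list with crypto/x509 (it signs with a throwaway in-memory CA rather than minikube's persisted ca.pem/ca-key.pem):

    // server_cert.go - issue a server certificate whose SANs match the san=[...] list above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA (stand-in for .minikube/certs/ca.pem + ca-key.pem).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs from the log above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-193194"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.70")},
            DNSNames:     []string{"localhost", "minikube", "test-preload-193194"},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
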
	I1122 00:54:21.072269  279486 provision.go:177] copyRemoteCerts
	I1122 00:54:21.072336  279486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:54:21.075175  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:21.075537  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:54:16 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:21.075559  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:21.075694  279486 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/test-preload-193194/id_rsa Username:docker}
	I1122 00:54:21.161860  279486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:54:21.194270  279486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1122 00:54:21.225474  279486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:54:21.257174  279486 provision.go:87] duration metric: took 284.115779ms to configureAuth
	I1122 00:54:21.257210  279486 buildroot.go:189] setting minikube options for container-runtime
	I1122 00:54:21.257400  279486 config.go:182] Loaded profile config "test-preload-193194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1122 00:54:21.260233  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:21.260785  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:54:16 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:21.260813  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:21.261006  279486 main.go:143] libmachine: Using SSH client type: native
	I1122 00:54:21.261238  279486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1122 00:54:21.261256  279486 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:54:21.526362  279486 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:54:21.526409  279486 machine.go:97] duration metric: took 920.837018ms to provisionDockerMachine
	I1122 00:54:21.526426  279486 start.go:293] postStartSetup for "test-preload-193194" (driver="kvm2")
	I1122 00:54:21.526439  279486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:54:21.526519  279486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:54:21.529637  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:21.530089  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:54:16 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:21.530131  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:21.530290  279486 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/test-preload-193194/id_rsa Username:docker}
	I1122 00:54:21.616845  279486 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:54:21.622409  279486 info.go:137] Remote host: Buildroot 2025.02
	I1122 00:54:21.622446  279486 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-244751/.minikube/addons for local assets ...
	I1122 00:54:21.622513  279486 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-244751/.minikube/files for local assets ...
	I1122 00:54:21.622608  279486 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-244751/.minikube/files/etc/ssl/certs/2506642.pem -> 2506642.pem in /etc/ssl/certs
	I1122 00:54:21.622729  279486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:54:21.635541  279486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/files/etc/ssl/certs/2506642.pem --> /etc/ssl/certs/2506642.pem (1708 bytes)
	I1122 00:54:21.672462  279486 start.go:296] duration metric: took 146.015316ms for postStartSetup
	I1122 00:54:21.672519  279486 fix.go:56] duration metric: took 17.664808426s for fixHost
	I1122 00:54:21.676160  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:21.676655  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:54:16 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:21.676712  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:21.676920  279486 main.go:143] libmachine: Using SSH client type: native
	I1122 00:54:21.677250  279486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.70 22 <nil> <nil>}
	I1122 00:54:21.677270  279486 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1122 00:54:21.790516  279486 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763772861.744515097
	
	I1122 00:54:21.790543  279486 fix.go:216] guest clock: 1763772861.744515097
	I1122 00:54:21.790553  279486 fix.go:229] Guest: 2025-11-22 00:54:21.744515097 +0000 UTC Remote: 2025-11-22 00:54:21.672525176 +0000 UTC m=+20.407966577 (delta=71.989921ms)
	I1122 00:54:21.790578  279486 fix.go:200] guest clock delta is within tolerance: 71.989921ms
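
The guest-clock check above runs date +%s.%N inside the VM and compares it with the host clock, acting only if the delta exceeds a tolerance. A small Go sketch of that comparison (the runCmd helper is a hypothetical stand-in for an SSH session, and the tolerance value is illustrative):

    // clock_delta.go - compare the guest clock with the host clock.
    package main

    import (
        "fmt"
        "log"
        "strconv"
        "strings"
        "time"
    )

    // runCmd stands in for running a command on the guest over SSH;
    // here it simply pretends the guest clock matches the host.
    func runCmd(cmd string) (string, error) {
        return fmt.Sprintf("%.9f", float64(time.Now().UnixNano())/1e9), nil
    }

    func main() {
        out, err := runCmd("date +%s.%N")
        if err != nil {
            log.Fatal(err)
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            log.Fatal(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta: %s\n", delta)
        if delta > 2*time.Second { // illustrative tolerance
            log.Println("delta too large; the guest clock would need to be synced")
        }
    }
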
	I1122 00:54:21.790585  279486 start.go:83] releasing machines lock for "test-preload-193194", held for 17.782909753s
	I1122 00:54:21.793608  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:21.794036  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:54:16 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:21.794068  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:21.794631  279486 ssh_runner.go:195] Run: cat /version.json
	I1122 00:54:21.794720  279486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:54:21.797796  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:21.797930  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:21.798319  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:54:16 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:21.798377  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:54:16 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:21.798401  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:21.798450  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:21.798564  279486 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/test-preload-193194/id_rsa Username:docker}
	I1122 00:54:21.798747  279486 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/test-preload-193194/id_rsa Username:docker}
	I1122 00:54:21.906488  279486 ssh_runner.go:195] Run: systemctl --version
	I1122 00:54:21.913373  279486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:54:22.059521  279486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:54:22.067075  279486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:54:22.067167  279486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:54:22.092170  279486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1122 00:54:22.092196  279486 start.go:496] detecting cgroup driver to use...
	I1122 00:54:22.092276  279486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:54:22.114693  279486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:54:22.133868  279486 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:54:22.133933  279486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:54:22.152936  279486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:54:22.170377  279486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:54:22.322066  279486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:54:22.552374  279486 docker.go:234] disabling docker service ...
	I1122 00:54:22.552456  279486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:54:22.569764  279486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:54:22.585524  279486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:54:22.747420  279486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:54:22.899934  279486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:54:22.917174  279486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:54:22.941660  279486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1122 00:54:22.941752  279486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:54:22.954632  279486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:54:22.954715  279486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:54:22.967878  279486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:54:22.980633  279486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:54:22.993566  279486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:54:23.008455  279486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:54:23.021413  279486 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:54:23.043444  279486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
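
Taken together, the sed edits above leave the cri-o drop-in roughly in this shape (assembled from the substitutions shown, for illustration only; the section names follow cri-o's usual TOML layout, and the real /etc/crio/crio.conf.d/02-crio.conf contains more settings):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
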
	I1122 00:54:23.056276  279486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:54:23.067262  279486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1122 00:54:23.067336  279486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1122 00:54:23.089176  279486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:54:23.102468  279486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:54:23.248419  279486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:54:23.370257  279486 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:54:23.370345  279486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:54:23.376592  279486 start.go:564] Will wait 60s for crictl version
	I1122 00:54:23.376696  279486 ssh_runner.go:195] Run: which crictl
	I1122 00:54:23.381589  279486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1122 00:54:23.416631  279486 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1122 00:54:23.416753  279486 ssh_runner.go:195] Run: crio --version
	I1122 00:54:23.447977  279486 ssh_runner.go:195] Run: crio --version
	I1122 00:54:23.481523  279486 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1122 00:54:23.485904  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:23.486332  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:54:16 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:23.486357  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:23.486557  279486 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1122 00:54:23.491517  279486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:54:23.507843  279486 kubeadm.go:884] updating cluster {Name:test-preload-193194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-193194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:54:23.507965  279486 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1122 00:54:23.508009  279486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:54:23.553301  279486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1122 00:54:23.553391  279486 ssh_runner.go:195] Run: which lz4
	I1122 00:54:23.558345  279486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1122 00:54:23.563865  279486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1122 00:54:23.563912  279486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1122 00:54:25.195061  279486 crio.go:462] duration metric: took 1.636759807s to copy over tarball
	I1122 00:54:25.195167  279486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1122 00:54:26.890708  279486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.695478495s)
	I1122 00:54:26.890749  279486 crio.go:469] duration metric: took 1.695655803s to extract the tarball
	I1122 00:54:26.890759  279486 ssh_runner.go:146] rm: /preloaded.tar.lz4
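
The preload ships as a tar.lz4 and is unpacked with the exact tar invocation logged above. A short Go sketch of shelling out to the same command (paths assumed; requires lz4 and sudo on the guest, and is not minikube's ssh_runner):

    // extract_preload.go - unpack the preloaded image tarball under /var.
    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", // decompress with lz4
            "-C", "/var", // extract where cri-o keeps its image store
            "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
        log.Printf("extracted preload in %s", time.Since(start))
    }
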
	I1122 00:54:26.931935  279486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:54:26.972326  279486 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:54:26.972355  279486 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:54:26.972367  279486 kubeadm.go:935] updating node { 192.168.39.70 8443 v1.32.0 crio true true} ...
	I1122 00:54:26.972495  279486 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-193194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-193194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:54:26.972583  279486 ssh_runner.go:195] Run: crio config
	I1122 00:54:27.025802  279486 cni.go:84] Creating CNI manager for ""
	I1122 00:54:27.025835  279486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1122 00:54:27.025857  279486 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:54:27.025887  279486 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.70 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-193194 NodeName:test-preload-193194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:54:27.026055  279486 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-193194"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.70"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.70"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
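
The kubeadm/kubelet/kube-proxy YAML above is generated from the option struct logged a few lines earlier. A heavily trimmed Go sketch of that kind of rendering with text/template (the template covers only the InitConfiguration header and is illustrative, not minikube's real template):

    // render_kubeadm.go - fill a trimmed kubeadm config template with this node's values.
    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.NodeIP}}\n" +
        "  bindPort: {{.APIServerPort}}\n" +
        "nodeRegistration:\n" +
        "  criSocket: unix://{{.CRISocket}}\n" +
        "  name: \"{{.NodeName}}\"\n"

    func main() {
        data := struct {
            NodeIP        string
            APIServerPort int
            CRISocket     string
            NodeName      string
        }{"192.168.39.70", 8443, "/var/run/crio/crio.sock", "test-preload-193194"}

        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }
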
	
	I1122 00:54:27.026140  279486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1122 00:54:27.038970  279486 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:54:27.039062  279486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:54:27.051583  279486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1122 00:54:27.074188  279486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:54:27.096430  279486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1122 00:54:27.119997  279486 ssh_runner.go:195] Run: grep 192.168.39.70	control-plane.minikube.internal$ /etc/hosts
	I1122 00:54:27.124753  279486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:54:27.141126  279486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:54:27.290857  279486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:54:27.312370  279486 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194 for IP: 192.168.39.70
	I1122 00:54:27.312395  279486 certs.go:195] generating shared ca certs ...
	I1122 00:54:27.312419  279486 certs.go:227] acquiring lock for ca certs: {Name:mk43fa762c6315605485300e07cf83f8f357f8dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:54:27.312616  279486 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key
	I1122 00:54:27.312710  279486 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key
	I1122 00:54:27.312737  279486 certs.go:257] generating profile certs ...
	I1122 00:54:27.312859  279486 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/client.key
	I1122 00:54:27.312980  279486 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/apiserver.key.b6950778
	I1122 00:54:27.313040  279486 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/proxy-client.key
	I1122 00:54:27.313191  279486 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/250664.pem (1338 bytes)
	W1122 00:54:27.313231  279486 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-244751/.minikube/certs/250664_empty.pem, impossibly tiny 0 bytes
	I1122 00:54:27.313246  279486 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem (1675 bytes)
	I1122 00:54:27.313283  279486 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:54:27.313315  279486 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:54:27.313350  279486 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem (1679 bytes)
	I1122 00:54:27.313477  279486 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/files/etc/ssl/certs/2506642.pem (1708 bytes)
	I1122 00:54:27.314329  279486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:54:27.360023  279486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:54:27.401279  279486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:54:27.436013  279486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1122 00:54:27.470717  279486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1122 00:54:27.505157  279486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:54:27.537988  279486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:54:27.570713  279486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:54:27.602320  279486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/files/etc/ssl/certs/2506642.pem --> /usr/share/ca-certificates/2506642.pem (1708 bytes)
	I1122 00:54:27.632284  279486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:54:27.662758  279486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/certs/250664.pem --> /usr/share/ca-certificates/250664.pem (1338 bytes)
	I1122 00:54:27.692815  279486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:54:27.714188  279486 ssh_runner.go:195] Run: openssl version
	I1122 00:54:27.721305  279486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2506642.pem && ln -fs /usr/share/ca-certificates/2506642.pem /etc/ssl/certs/2506642.pem"
	I1122 00:54:27.735766  279486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2506642.pem
	I1122 00:54:27.741481  279486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:59 /usr/share/ca-certificates/2506642.pem
	I1122 00:54:27.741550  279486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2506642.pem
	I1122 00:54:27.749401  279486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2506642.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:54:27.763600  279486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:54:27.777597  279486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:54:27.783283  279486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:54:27.783339  279486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:54:27.790779  279486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:54:27.805400  279486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250664.pem && ln -fs /usr/share/ca-certificates/250664.pem /etc/ssl/certs/250664.pem"
	I1122 00:54:27.819769  279486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250664.pem
	I1122 00:54:27.825592  279486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:59 /usr/share/ca-certificates/250664.pem
	I1122 00:54:27.825663  279486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250664.pem
	I1122 00:54:27.833163  279486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250664.pem /etc/ssl/certs/51391683.0"
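
The openssl/ln pairs above install each CA certificate under /etc/ssl/certs by its OpenSSL subject hash, which is how the system trust lookup finds it. A compact Go sketch of the same two steps (path taken from this run; error handling trimmed, and this is not minikube's implementation):

    // trust_cert.go - link a CA certificate into /etc/ssl/certs by subject hash.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            log.Fatalf("openssl: %v", err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log above
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // replace a stale link, the way ln -fs would
        if err := os.Symlink(pemPath, link); err != nil {
            log.Fatalf("symlink: %v", err)
        }
        fmt.Println("linked", link, "->", pemPath)
    }
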
	I1122 00:54:27.847421  279486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:54:27.852850  279486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:54:27.860637  279486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:54:27.868267  279486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:54:27.875991  279486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:54:27.883485  279486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:54:27.891226  279486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
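
Each -checkend 86400 call above asks whether the certificate stays valid for at least another 24 hours. The same check in Go, using crypto/x509 instead of shelling out to openssl (the path is one of the certs checked above; illustrative only):

    // cert_expiry.go - fail if a certificate expires within the next 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            log.Fatalf("certificate expires too soon: %s", cert.NotAfter)
        }
        fmt.Println("certificate is valid for at least another 24h")
    }
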
	I1122 00:54:27.898608  279486 kubeadm.go:401] StartCluster: {Name:test-preload-193194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-193194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:54:27.898755  279486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:54:27.898822  279486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:54:27.934540  279486 cri.go:89] found id: ""
	I1122 00:54:27.934617  279486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:54:27.948268  279486 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:54:27.948297  279486 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:54:27.948354  279486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:54:27.961103  279486 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:54:27.961638  279486 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-193194" does not appear in /home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1122 00:54:27.961833  279486 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-244751/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-193194" cluster setting kubeconfig missing "test-preload-193194" context setting]
	I1122 00:54:27.962139  279486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/kubeconfig: {Name:mkbde37dbfe874aace118914fefd91b607e3afff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:54:27.962654  279486 kapi.go:59] client config for test-preload-193194: &rest.Config{Host:"https://192.168.39.70:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/client.key", CAFile:"/home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1122 00:54:27.963155  279486 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1122 00:54:27.963170  279486 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1122 00:54:27.963176  279486 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1122 00:54:27.963180  279486 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1122 00:54:27.963183  279486 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1122 00:54:27.963562  279486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:54:27.976223  279486 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.70
	I1122 00:54:27.976265  279486 kubeadm.go:1161] stopping kube-system containers ...
	I1122 00:54:27.976283  279486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1122 00:54:27.976361  279486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:54:28.011510  279486 cri.go:89] found id: ""
	I1122 00:54:28.011583  279486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1122 00:54:28.036277  279486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:54:28.049379  279486 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:54:28.049407  279486 kubeadm.go:158] found existing configuration files:
	
	I1122 00:54:28.049466  279486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:54:28.061118  279486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:54:28.061214  279486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:54:28.074707  279486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:54:28.086445  279486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:54:28.086514  279486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:54:28.098460  279486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:54:28.109584  279486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:54:28.109644  279486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:54:28.121605  279486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:54:28.132433  279486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:54:28.132488  279486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
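The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: for each file under /etc/kubernetes it checks for the expected control-plane endpoint and removes the file when the check fails (here the files simply do not exist yet, so all four are cleared and regenerated by the kubeadm phases that follow). A very rough Go sketch of that pattern is shown below; the runCmd helper is hypothetical and stands in for minikube's SSH runner, which executes these commands on the guest, not locally.

package main

import (
	"fmt"
	"os/exec"
)

// runCmd is a hypothetical stand-in for minikube's ssh_runner; the real code
// runs these commands on the guest VM over SSH.
func runCmd(args ...string) error {
	return exec.Command(args[0], args[1:]...).Run()
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// If the expected endpoint is not found (or the file is missing),
		// drop the file so "kubeadm init phase kubeconfig" regenerates it.
		if err := runCmd("sudo", "grep", endpoint, f); err != nil {
			fmt.Printf("%s may not contain %s - removing\n", f, endpoint)
			_ = runCmd("sudo", "rm", "-f", f)
		}
	}
}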
	I1122 00:54:28.143877  279486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:54:28.155832  279486 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1122 00:54:28.214542  279486 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1122 00:54:29.301625  279486 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.087047333s)
	I1122 00:54:29.301722  279486 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1122 00:54:29.584817  279486 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1122 00:54:29.651543  279486 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1122 00:54:29.739491  279486 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:54:29.739579  279486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:54:30.239783  279486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:54:30.740165  279486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:54:31.240438  279486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:54:31.740409  279486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:54:32.240430  279486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:54:32.276932  279486 api_server.go:72] duration metric: took 2.537456832s to wait for apiserver process to appear ...
	I1122 00:54:32.276976  279486 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:54:32.276998  279486 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1122 00:54:35.091282  279486 api_server.go:279] https://192.168.39.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1122 00:54:35.091318  279486 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1122 00:54:35.091339  279486 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1122 00:54:35.128132  279486 api_server.go:279] https://192.168.39.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1122 00:54:35.128174  279486 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1122 00:54:35.277560  279486 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1122 00:54:35.286327  279486 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:54:35.286364  279486 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:54:35.778094  279486 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1122 00:54:35.782719  279486 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:54:35.782745  279486 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:54:36.277376  279486 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1122 00:54:36.285947  279486 api_server.go:279] https://192.168.39.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:54:36.285981  279486 api_server.go:103] status: https://192.168.39.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:54:36.777982  279486 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1122 00:54:36.784161  279486 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I1122 00:54:36.792172  279486 api_server.go:141] control plane version: v1.32.0
	I1122 00:54:36.792204  279486 api_server.go:131] duration metric: took 4.515220596s to wait for apiserver health ...
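The healthz sequence above shows the usual recovery pattern after a control-plane restart: first 403 responses (likely because the bootstrap RBAC roles that allow unauthenticated access to /healthz do not exist yet; note the rbac/bootstrap-roles hook is still failing in the 500 responses), then 500 while etcd and the remaining post-start hooks settle, and finally 200 "ok". Purely as a loose illustration of such a poll (not minikube's implementation), using the client certificate and CA paths from the rest.Config dump earlier and a retry cadence similar to the roughly 500ms spacing visible in the timestamps:

// Illustration only: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	cert, err := tls.LoadX509KeyPair(
		"/home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/client.crt",
		"/home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/client.key",
	)
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
		},
	}
	for {
		resp, err := client.Get("https://192.168.39.70:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz status:", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // retry until the endpoint reports ok
	}
}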
	I1122 00:54:36.792215  279486 cni.go:84] Creating CNI manager for ""
	I1122 00:54:36.792223  279486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1122 00:54:36.793930  279486 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1122 00:54:36.795285  279486 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1122 00:54:36.822429  279486 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1122 00:54:36.862196  279486 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:54:36.873439  279486 system_pods.go:59] 7 kube-system pods found
	I1122 00:54:36.873475  279486 system_pods.go:61] "coredns-668d6bf9bc-kt2h2" [929d3254-ff83-4e09-b17d-8de8212c85c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:54:36.873483  279486 system_pods.go:61] "etcd-test-preload-193194" [71367410-e418-4ee1-96a4-ee0afcd1fae3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:54:36.873492  279486 system_pods.go:61] "kube-apiserver-test-preload-193194" [270e1221-959c-4d44-98de-e021fcef6bab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:54:36.873501  279486 system_pods.go:61] "kube-controller-manager-test-preload-193194" [f86268a7-1bb9-42c1-a41a-9bb9442fe2b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:54:36.873509  279486 system_pods.go:61] "kube-proxy-b4x45" [4d6b3685-bb30-4389-b236-db63a4a31dbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:54:36.873516  279486 system_pods.go:61] "kube-scheduler-test-preload-193194" [4b2e5757-bc33-4140-9df1-57b23c66db63] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:54:36.873528  279486 system_pods.go:61] "storage-provisioner" [f19b2824-1993-4859-b367-9dd74d4a1a0e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:54:36.873540  279486 system_pods.go:74] duration metric: took 11.315451ms to wait for pod list to return data ...
	I1122 00:54:36.873554  279486 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:54:36.882522  279486 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1122 00:54:36.882550  279486 node_conditions.go:123] node cpu capacity is 2
	I1122 00:54:36.882567  279486 node_conditions.go:105] duration metric: took 9.007959ms to run NodePressure ...
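node_conditions above verifies NodePressure by reading the node's reported capacity (17734596Ki of ephemeral storage, 2 CPUs). Purely as an illustration of where those numbers live, and not the test's helper, they can be read from the Node object's status with client-go, assuming the kubeconfig path used throughout this run:

// Illustration only: read the node capacity fields logged by node_conditions.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21934-244751/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "test-preload-193194", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
	fmt.Println("ephemeral-storage capacity:", node.Status.Capacity.StorageEphemeral().String())
}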
	I1122 00:54:36.882639  279486 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1122 00:54:37.163430  279486 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1122 00:54:37.168792  279486 kubeadm.go:744] kubelet initialised
	I1122 00:54:37.168821  279486 kubeadm.go:745] duration metric: took 5.356388ms waiting for restarted kubelet to initialise ...
	I1122 00:54:37.168843  279486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:54:37.188687  279486 ops.go:34] apiserver oom_adj: -16
	I1122 00:54:37.188721  279486 kubeadm.go:602] duration metric: took 9.240415463s to restartPrimaryControlPlane
	I1122 00:54:37.188737  279486 kubeadm.go:403] duration metric: took 9.29013926s to StartCluster
	I1122 00:54:37.188764  279486 settings.go:142] acquiring lock: {Name:mkd124ec98418d6d2386a8f1a0e2e5ff6f0f99d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:54:37.188870  279486 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1122 00:54:37.189456  279486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/kubeconfig: {Name:mkbde37dbfe874aace118914fefd91b607e3afff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:54:37.189772  279486 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.70 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:54:37.189909  279486 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:54:37.190030  279486 addons.go:70] Setting storage-provisioner=true in profile "test-preload-193194"
	I1122 00:54:37.190054  279486 addons.go:239] Setting addon storage-provisioner=true in "test-preload-193194"
	I1122 00:54:37.190056  279486 config.go:182] Loaded profile config "test-preload-193194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	W1122 00:54:37.190069  279486 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:54:37.190107  279486 host.go:66] Checking if "test-preload-193194" exists ...
	I1122 00:54:37.190182  279486 addons.go:70] Setting default-storageclass=true in profile "test-preload-193194"
	I1122 00:54:37.190268  279486 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-193194"
	I1122 00:54:37.192920  279486 kapi.go:59] client config for test-preload-193194: &rest.Config{Host:"https://192.168.39.70:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/client.key", CAFile:"/home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1122 00:54:37.193295  279486 addons.go:239] Setting addon default-storageclass=true in "test-preload-193194"
	W1122 00:54:37.193313  279486 addons.go:248] addon default-storageclass should already be in state true
	I1122 00:54:37.193337  279486 host.go:66] Checking if "test-preload-193194" exists ...
	I1122 00:54:37.193904  279486 out.go:179] * Verifying Kubernetes components...
	I1122 00:54:37.194919  279486 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:54:37.194930  279486 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:54:37.194949  279486 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:54:37.195859  279486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:54:37.196595  279486 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:54:37.196616  279486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:54:37.198428  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:37.198877  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:54:16 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:37.198936  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:37.199120  279486 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/test-preload-193194/id_rsa Username:docker}
	I1122 00:54:37.199700  279486 main.go:143] libmachine: domain test-preload-193194 has defined MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:37.200170  279486 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:45:9e:73", ip: ""} in network mk-test-preload-193194: {Iface:virbr1 ExpiryTime:2025-11-22 01:54:16 +0000 UTC Type:0 Mac:52:54:00:45:9e:73 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:test-preload-193194 Clientid:01:52:54:00:45:9e:73}
	I1122 00:54:37.200210  279486 main.go:143] libmachine: domain test-preload-193194 has defined IP address 192.168.39.70 and MAC address 52:54:00:45:9e:73 in network mk-test-preload-193194
	I1122 00:54:37.200422  279486 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/test-preload-193194/id_rsa Username:docker}
	I1122 00:54:37.470845  279486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:54:37.495969  279486 node_ready.go:35] waiting up to 6m0s for node "test-preload-193194" to be "Ready" ...
	I1122 00:54:37.499109  279486 node_ready.go:49] node "test-preload-193194" is "Ready"
	I1122 00:54:37.499147  279486 node_ready.go:38] duration metric: took 3.119872ms for node "test-preload-193194" to be "Ready" ...
	I1122 00:54:37.499168  279486 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:54:37.499230  279486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:54:37.519171  279486 api_server.go:72] duration metric: took 329.358483ms to wait for apiserver process to appear ...
	I1122 00:54:37.519198  279486 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:54:37.519218  279486 api_server.go:253] Checking apiserver healthz at https://192.168.39.70:8443/healthz ...
	I1122 00:54:37.525053  279486 api_server.go:279] https://192.168.39.70:8443/healthz returned 200:
	ok
	I1122 00:54:37.525843  279486 api_server.go:141] control plane version: v1.32.0
	I1122 00:54:37.525879  279486 api_server.go:131] duration metric: took 6.662105ms to wait for apiserver health ...
	I1122 00:54:37.525893  279486 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:54:37.530582  279486 system_pods.go:59] 7 kube-system pods found
	I1122 00:54:37.530622  279486 system_pods.go:61] "coredns-668d6bf9bc-kt2h2" [929d3254-ff83-4e09-b17d-8de8212c85c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:54:37.530634  279486 system_pods.go:61] "etcd-test-preload-193194" [71367410-e418-4ee1-96a4-ee0afcd1fae3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:54:37.530648  279486 system_pods.go:61] "kube-apiserver-test-preload-193194" [270e1221-959c-4d44-98de-e021fcef6bab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:54:37.530660  279486 system_pods.go:61] "kube-controller-manager-test-preload-193194" [f86268a7-1bb9-42c1-a41a-9bb9442fe2b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:54:37.530668  279486 system_pods.go:61] "kube-proxy-b4x45" [4d6b3685-bb30-4389-b236-db63a4a31dbc] Running
	I1122 00:54:37.530695  279486 system_pods.go:61] "kube-scheduler-test-preload-193194" [4b2e5757-bc33-4140-9df1-57b23c66db63] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:54:37.530703  279486 system_pods.go:61] "storage-provisioner" [f19b2824-1993-4859-b367-9dd74d4a1a0e] Running
	I1122 00:54:37.530712  279486 system_pods.go:74] duration metric: took 4.806365ms to wait for pod list to return data ...
	I1122 00:54:37.530724  279486 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:54:37.535099  279486 default_sa.go:45] found service account: "default"
	I1122 00:54:37.535124  279486 default_sa.go:55] duration metric: took 4.392745ms for default service account to be created ...
	I1122 00:54:37.535136  279486 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:54:37.547582  279486 system_pods.go:86] 7 kube-system pods found
	I1122 00:54:37.547624  279486 system_pods.go:89] "coredns-668d6bf9bc-kt2h2" [929d3254-ff83-4e09-b17d-8de8212c85c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:54:37.547636  279486 system_pods.go:89] "etcd-test-preload-193194" [71367410-e418-4ee1-96a4-ee0afcd1fae3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:54:37.547647  279486 system_pods.go:89] "kube-apiserver-test-preload-193194" [270e1221-959c-4d44-98de-e021fcef6bab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:54:37.547654  279486 system_pods.go:89] "kube-controller-manager-test-preload-193194" [f86268a7-1bb9-42c1-a41a-9bb9442fe2b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:54:37.547659  279486 system_pods.go:89] "kube-proxy-b4x45" [4d6b3685-bb30-4389-b236-db63a4a31dbc] Running
	I1122 00:54:37.547666  279486 system_pods.go:89] "kube-scheduler-test-preload-193194" [4b2e5757-bc33-4140-9df1-57b23c66db63] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:54:37.547693  279486 system_pods.go:89] "storage-provisioner" [f19b2824-1993-4859-b367-9dd74d4a1a0e] Running
	I1122 00:54:37.547710  279486 system_pods.go:126] duration metric: took 12.564628ms to wait for k8s-apps to be running ...
	I1122 00:54:37.547719  279486 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:54:37.547782  279486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:54:37.565548  279486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:54:37.575725  279486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:54:37.582642  279486 system_svc.go:56] duration metric: took 34.912968ms WaitForService to wait for kubelet
	I1122 00:54:37.582687  279486 kubeadm.go:587] duration metric: took 392.868273ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:54:37.582712  279486 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:54:37.587522  279486 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1122 00:54:37.587546  279486 node_conditions.go:123] node cpu capacity is 2
	I1122 00:54:37.587561  279486 node_conditions.go:105] duration metric: took 4.843419ms to run NodePressure ...
	I1122 00:54:37.587577  279486 start.go:242] waiting for startup goroutines ...
	I1122 00:54:38.288564  279486 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1122 00:54:38.290389  279486 addons.go:530] duration metric: took 1.100487663s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1122 00:54:38.290460  279486 start.go:247] waiting for cluster config update ...
	I1122 00:54:38.290480  279486 start.go:256] writing updated cluster config ...
	I1122 00:54:38.290844  279486 ssh_runner.go:195] Run: rm -f paused
	I1122 00:54:38.300156  279486 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:54:38.300820  279486 kapi.go:59] client config for test-preload-193194: &rest.Config{Host:"https://192.168.39.70:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-244751/.minikube/profiles/test-preload-193194/client.key", CAFile:"/home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1122 00:54:38.304516  279486 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-kt2h2" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:54:40.311147  279486 pod_ready.go:104] pod "coredns-668d6bf9bc-kt2h2" is not "Ready", error: <nil>
	W1122 00:54:42.811449  279486 pod_ready.go:104] pod "coredns-668d6bf9bc-kt2h2" is not "Ready", error: <nil>
	W1122 00:54:44.812898  279486 pod_ready.go:104] pod "coredns-668d6bf9bc-kt2h2" is not "Ready", error: <nil>
	W1122 00:54:47.311292  279486 pod_ready.go:104] pod "coredns-668d6bf9bc-kt2h2" is not "Ready", error: <nil>
	I1122 00:54:48.311321  279486 pod_ready.go:94] pod "coredns-668d6bf9bc-kt2h2" is "Ready"
	I1122 00:54:48.311362  279486 pod_ready.go:86] duration metric: took 10.006812071s for pod "coredns-668d6bf9bc-kt2h2" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:48.314750  279486 pod_ready.go:83] waiting for pod "etcd-test-preload-193194" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:48.319505  279486 pod_ready.go:94] pod "etcd-test-preload-193194" is "Ready"
	I1122 00:54:48.319538  279486 pod_ready.go:86] duration metric: took 4.758922ms for pod "etcd-test-preload-193194" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:48.321718  279486 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-193194" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:48.327230  279486 pod_ready.go:94] pod "kube-apiserver-test-preload-193194" is "Ready"
	I1122 00:54:48.327259  279486 pod_ready.go:86] duration metric: took 5.509978ms for pod "kube-apiserver-test-preload-193194" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:48.329362  279486 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-193194" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:48.909598  279486 pod_ready.go:94] pod "kube-controller-manager-test-preload-193194" is "Ready"
	I1122 00:54:48.909626  279486 pod_ready.go:86] duration metric: took 580.235423ms for pod "kube-controller-manager-test-preload-193194" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:49.108855  279486 pod_ready.go:83] waiting for pod "kube-proxy-b4x45" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:49.509110  279486 pod_ready.go:94] pod "kube-proxy-b4x45" is "Ready"
	I1122 00:54:49.509139  279486 pod_ready.go:86] duration metric: took 400.258198ms for pod "kube-proxy-b4x45" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:49.709026  279486 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-193194" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:50.108814  279486 pod_ready.go:94] pod "kube-scheduler-test-preload-193194" is "Ready"
	I1122 00:54:50.108847  279486 pod_ready.go:86] duration metric: took 399.789699ms for pod "kube-scheduler-test-preload-193194" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:50.108862  279486 pod_ready.go:40] duration metric: took 11.808672573s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
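The pod_ready waits above walk the listed control-plane labels one at a time and log how long each pod took to report Ready (coredns needed about 10s after the restart; the others were already Ready). A hedged client-go sketch of that kind of readiness wait, not the test's actual pod_ready helper, could look like this, again assuming the same kubeconfig path:

// Rough sketch: wait for labelled kube-system pods to report the Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21934-244751/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same label selectors the log lists for the extra kube-system wait.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
				fmt.Printf("pod %q is Ready\n", pods.Items[0].Name)
				break
			}
			time.Sleep(2 * time.Second) // poll until Ready; a real helper would also time out
		}
	}
}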
	I1122 00:54:50.154281  279486 start.go:628] kubectl: 1.34.2, cluster: 1.32.0 (minor skew: 2)
	I1122 00:54:50.156101  279486 out.go:203] 
	W1122 00:54:50.157405  279486 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.32.0.
	I1122 00:54:50.158511  279486 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1122 00:54:50.159663  279486 out.go:179] * Done! kubectl is now configured to use "test-preload-193194" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 22 00:54:50 test-preload-193194 crio[845]: time="2025-11-22 00:54:50.945573848Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763772890945447124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9cd5a78e-b36b-4bd9-82ff-b9ae16cbf2f8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:54:50 test-preload-193194 crio[845]: time="2025-11-22 00:54:50.947259337Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=990a9543-eeb9-48e7-a34e-2ecf95f02982 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:54:50 test-preload-193194 crio[845]: time="2025-11-22 00:54:50.947313207Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=990a9543-eeb9-48e7-a34e-2ecf95f02982 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:54:50 test-preload-193194 crio[845]: time="2025-11-22 00:54:50.947485077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b30cac06b7a0b44abd59437dc34ccc8374805b77549ca7a10899a1f517ba1725,PodSandboxId:5aeab08a65202978a8f6779244ba8b1685bb7f953f02ab4837bc357ef2e63eda,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763772879776148957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kt2h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 929d3254-ff83-4e09-b17d-8de8212c85c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:918d5eead5bc5b24f3b51b9d316e46127ef1a262ebe67cc0d631f7891131cf73,PodSandboxId:eb7a2370dc1018e9fa16855c6b8f57a10c2462af097545fdb7c4eff6f897d7af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763772876137639630,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4x45,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4d6b3685-bb30-4389-b236-db63a4a31dbc,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bb29b79fe99cbbdf4fac0f45db8e7861317c13290014de70d605e77e0e42544,PodSandboxId:162700f7b54cbd06670c1d417e298cf798bd3c4c1af2dea2d797f73ebed4ff37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763772876179957745,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1
9b2824-1993-4859-b367-9dd74d4a1a0e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0a2e8d39af7cd16576968cb8bbd5bb49b12063b08250e82b95395bed899f07,PodSandboxId:ce084f6bf215fd8e2593b74f0f64ceec2c55ccc8992ef32775f4e0b6b0739a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763772871776282953,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-193194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06c29e03a30cf2bbe583856a0998a54,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5795fcb7a3f02e51ab04f1a52df1d769c77b95f394871af2e6dcf1e23831c4,PodSandboxId:8399c64b3b5716c5812aadacef94e4781b49f84da4ce67b5a9adb462a6b22763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763772871730547701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-193194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad8c0347320d5b99cc8bf7
09e78b075,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23228b3b40501de06865ce9bf3315ea6d586811e65e4465d27ced85243c96e7e,PodSandboxId:667b4bb0edd20361541cb2776039259f8d82353dd5cf9c304793ad72450a2c25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763772871759334470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-193194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e5775c3af2fc5e8c236dab83d590f6,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1b6adc607873032eec3570ee742a156166bc5ceecaacd798db72e353630daa,PodSandboxId:a2dd2a93f6c491e74a84b8363ab72a318adbf34905af0ec14add3ede4087ecbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763772871712994246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-193194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d3c955054d228ff13bf79595ba554f5,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=990a9543-eeb9-48e7-a34e-2ecf95f02982 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:54:50 test-preload-193194 crio[845]: time="2025-11-22 00:54:50.985618986Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba12c0f4-e0b4-48ff-8a55-4ac86d59e73e name=/runtime.v1.RuntimeService/Version
	Nov 22 00:54:50 test-preload-193194 crio[845]: time="2025-11-22 00:54:50.985757027Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba12c0f4-e0b4-48ff-8a55-4ac86d59e73e name=/runtime.v1.RuntimeService/Version
	Nov 22 00:54:50 test-preload-193194 crio[845]: time="2025-11-22 00:54:50.988233564Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3b5ba0e-f08c-4622-aff7-fdb013e52f32 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:54:50 test-preload-193194 crio[845]: time="2025-11-22 00:54:50.988907818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763772890988883110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3b5ba0e-f08c-4622-aff7-fdb013e52f32 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:54:50 test-preload-193194 crio[845]: time="2025-11-22 00:54:50.990284934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4709d302-8c19-4d36-9551-d9a7d06462ab name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:54:50 test-preload-193194 crio[845]: time="2025-11-22 00:54:50.990439074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4709d302-8c19-4d36-9551-d9a7d06462ab name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:54:50 test-preload-193194 crio[845]: time="2025-11-22 00:54:50.990619470Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b30cac06b7a0b44abd59437dc34ccc8374805b77549ca7a10899a1f517ba1725,PodSandboxId:5aeab08a65202978a8f6779244ba8b1685bb7f953f02ab4837bc357ef2e63eda,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763772879776148957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kt2h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 929d3254-ff83-4e09-b17d-8de8212c85c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:918d5eead5bc5b24f3b51b9d316e46127ef1a262ebe67cc0d631f7891131cf73,PodSandboxId:eb7a2370dc1018e9fa16855c6b8f57a10c2462af097545fdb7c4eff6f897d7af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763772876137639630,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4x45,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4d6b3685-bb30-4389-b236-db63a4a31dbc,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bb29b79fe99cbbdf4fac0f45db8e7861317c13290014de70d605e77e0e42544,PodSandboxId:162700f7b54cbd06670c1d417e298cf798bd3c4c1af2dea2d797f73ebed4ff37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763772876179957745,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1
9b2824-1993-4859-b367-9dd74d4a1a0e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0a2e8d39af7cd16576968cb8bbd5bb49b12063b08250e82b95395bed899f07,PodSandboxId:ce084f6bf215fd8e2593b74f0f64ceec2c55ccc8992ef32775f4e0b6b0739a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763772871776282953,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-193194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06c29e03a30cf2bbe583856a0998a54,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5795fcb7a3f02e51ab04f1a52df1d769c77b95f394871af2e6dcf1e23831c4,PodSandboxId:8399c64b3b5716c5812aadacef94e4781b49f84da4ce67b5a9adb462a6b22763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763772871730547701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-193194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad8c0347320d5b99cc8bf7
09e78b075,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23228b3b40501de06865ce9bf3315ea6d586811e65e4465d27ced85243c96e7e,PodSandboxId:667b4bb0edd20361541cb2776039259f8d82353dd5cf9c304793ad72450a2c25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763772871759334470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-193194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e5775c3af2fc5e8c236dab83d590f6,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1b6adc607873032eec3570ee742a156166bc5ceecaacd798db72e353630daa,PodSandboxId:a2dd2a93f6c491e74a84b8363ab72a318adbf34905af0ec14add3ede4087ecbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763772871712994246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-193194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d3c955054d228ff13bf79595ba554f5,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4709d302-8c19-4d36-9551-d9a7d06462ab name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:54:51 test-preload-193194 crio[845]: time="2025-11-22 00:54:51.025510223Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9beae5ba-0efc-4d61-a215-c1fd5fe2dac0 name=/runtime.v1.RuntimeService/Version
	Nov 22 00:54:51 test-preload-193194 crio[845]: time="2025-11-22 00:54:51.025579432Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9beae5ba-0efc-4d61-a215-c1fd5fe2dac0 name=/runtime.v1.RuntimeService/Version
	Nov 22 00:54:51 test-preload-193194 crio[845]: time="2025-11-22 00:54:51.026993491Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3f2b497-bf2c-4355-b42b-f6e24adde83a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:54:51 test-preload-193194 crio[845]: time="2025-11-22 00:54:51.027632890Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763772891027609830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3f2b497-bf2c-4355-b42b-f6e24adde83a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:54:51 test-preload-193194 crio[845]: time="2025-11-22 00:54:51.029356607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=edef973b-ba82-42f9-a075-4dcc9b2afff0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:54:51 test-preload-193194 crio[845]: time="2025-11-22 00:54:51.029430204Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=edef973b-ba82-42f9-a075-4dcc9b2afff0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:54:51 test-preload-193194 crio[845]: time="2025-11-22 00:54:51.029614354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b30cac06b7a0b44abd59437dc34ccc8374805b77549ca7a10899a1f517ba1725,PodSandboxId:5aeab08a65202978a8f6779244ba8b1685bb7f953f02ab4837bc357ef2e63eda,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763772879776148957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kt2h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 929d3254-ff83-4e09-b17d-8de8212c85c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:918d5eead5bc5b24f3b51b9d316e46127ef1a262ebe67cc0d631f7891131cf73,PodSandboxId:eb7a2370dc1018e9fa16855c6b8f57a10c2462af097545fdb7c4eff6f897d7af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763772876137639630,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4x45,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4d6b3685-bb30-4389-b236-db63a4a31dbc,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bb29b79fe99cbbdf4fac0f45db8e7861317c13290014de70d605e77e0e42544,PodSandboxId:162700f7b54cbd06670c1d417e298cf798bd3c4c1af2dea2d797f73ebed4ff37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763772876179957745,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1
9b2824-1993-4859-b367-9dd74d4a1a0e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0a2e8d39af7cd16576968cb8bbd5bb49b12063b08250e82b95395bed899f07,PodSandboxId:ce084f6bf215fd8e2593b74f0f64ceec2c55ccc8992ef32775f4e0b6b0739a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763772871776282953,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-193194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06c29e03a30cf2bbe583856a0998a54,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5795fcb7a3f02e51ab04f1a52df1d769c77b95f394871af2e6dcf1e23831c4,PodSandboxId:8399c64b3b5716c5812aadacef94e4781b49f84da4ce67b5a9adb462a6b22763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763772871730547701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-193194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad8c0347320d5b99cc8bf7
09e78b075,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23228b3b40501de06865ce9bf3315ea6d586811e65e4465d27ced85243c96e7e,PodSandboxId:667b4bb0edd20361541cb2776039259f8d82353dd5cf9c304793ad72450a2c25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763772871759334470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-193194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e5775c3af2fc5e8c236dab83d590f6,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1b6adc607873032eec3570ee742a156166bc5ceecaacd798db72e353630daa,PodSandboxId:a2dd2a93f6c491e74a84b8363ab72a318adbf34905af0ec14add3ede4087ecbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763772871712994246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-193194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d3c955054d228ff13bf79595ba554f5,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=edef973b-ba82-42f9-a075-4dcc9b2afff0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:54:51 test-preload-193194 crio[845]: time="2025-11-22 00:54:51.061242783Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=373e833c-0f11-4c8e-8a7e-f305eb043a9c name=/runtime.v1.RuntimeService/Version
	Nov 22 00:54:51 test-preload-193194 crio[845]: time="2025-11-22 00:54:51.061481969Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=373e833c-0f11-4c8e-8a7e-f305eb043a9c name=/runtime.v1.RuntimeService/Version
	Nov 22 00:54:51 test-preload-193194 crio[845]: time="2025-11-22 00:54:51.063850125Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4fdcb8c3-374d-421a-959f-80364d3c40f3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:54:51 test-preload-193194 crio[845]: time="2025-11-22 00:54:51.064364474Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763772891064341301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4fdcb8c3-374d-421a-959f-80364d3c40f3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 00:54:51 test-preload-193194 crio[845]: time="2025-11-22 00:54:51.065752613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c084f05-9ee4-4adc-93dd-64b9be711909 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:54:51 test-preload-193194 crio[845]: time="2025-11-22 00:54:51.065822632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c084f05-9ee4-4adc-93dd-64b9be711909 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 00:54:51 test-preload-193194 crio[845]: time="2025-11-22 00:54:51.066602730Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b30cac06b7a0b44abd59437dc34ccc8374805b77549ca7a10899a1f517ba1725,PodSandboxId:5aeab08a65202978a8f6779244ba8b1685bb7f953f02ab4837bc357ef2e63eda,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763772879776148957,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-kt2h2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 929d3254-ff83-4e09-b17d-8de8212c85c8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:918d5eead5bc5b24f3b51b9d316e46127ef1a262ebe67cc0d631f7891131cf73,PodSandboxId:eb7a2370dc1018e9fa16855c6b8f57a10c2462af097545fdb7c4eff6f897d7af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763772876137639630,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4x45,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4d6b3685-bb30-4389-b236-db63a4a31dbc,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bb29b79fe99cbbdf4fac0f45db8e7861317c13290014de70d605e77e0e42544,PodSandboxId:162700f7b54cbd06670c1d417e298cf798bd3c4c1af2dea2d797f73ebed4ff37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763772876179957745,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1
9b2824-1993-4859-b367-9dd74d4a1a0e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf0a2e8d39af7cd16576968cb8bbd5bb49b12063b08250e82b95395bed899f07,PodSandboxId:ce084f6bf215fd8e2593b74f0f64ceec2c55ccc8992ef32775f4e0b6b0739a11,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763772871776282953,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-193194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06c29e03a30cf2bbe583856a0998a54,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf5795fcb7a3f02e51ab04f1a52df1d769c77b95f394871af2e6dcf1e23831c4,PodSandboxId:8399c64b3b5716c5812aadacef94e4781b49f84da4ce67b5a9adb462a6b22763,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763772871730547701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-193194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad8c0347320d5b99cc8bf7
09e78b075,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23228b3b40501de06865ce9bf3315ea6d586811e65e4465d27ced85243c96e7e,PodSandboxId:667b4bb0edd20361541cb2776039259f8d82353dd5cf9c304793ad72450a2c25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763772871759334470,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-193194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3e5775c3af2fc5e8c236dab83d590f6,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1b6adc607873032eec3570ee742a156166bc5ceecaacd798db72e353630daa,PodSandboxId:a2dd2a93f6c491e74a84b8363ab72a318adbf34905af0ec14add3ede4087ecbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763772871712994246,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-193194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d3c955054d228ff13bf79595ba554f5,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c084f05-9ee4-4adc-93dd-64b9be711909 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	b30cac06b7a0b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 seconds ago      Running             coredns                   1                   5aeab08a65202       coredns-668d6bf9bc-kt2h2                      kube-system
	4bb29b79fe99c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   162700f7b54cb       storage-provisioner                           kube-system
	918d5eead5bc5       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   15 seconds ago      Running             kube-proxy                1                   eb7a2370dc101       kube-proxy-b4x45                              kube-system
	cf0a2e8d39af7       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   ce084f6bf215f       etcd-test-preload-193194                      kube-system
	23228b3b40501       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   667b4bb0edd20       kube-scheduler-test-preload-193194            kube-system
	cf5795fcb7a3f       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   8399c64b3b571       kube-controller-manager-test-preload-193194   kube-system
	0f1b6adc60787       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   a2dd2a93f6c49       kube-apiserver-test-preload-193194            kube-system
	
	
	==> coredns [b30cac06b7a0b44abd59437dc34ccc8374805b77549ca7a10899a1f517ba1725] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49230 - 48227 "HINFO IN 4573280249713322872.1735757817519640030. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.082243375s
	
	
	==> describe nodes <==
	Name:               test-preload-193194
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-193194
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=test-preload-193194
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_53_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:52:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-193194
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:54:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:54:37 +0000   Sat, 22 Nov 2025 00:52:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:54:37 +0000   Sat, 22 Nov 2025 00:52:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:54:37 +0000   Sat, 22 Nov 2025 00:52:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:54:37 +0000   Sat, 22 Nov 2025 00:54:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    test-preload-193194
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc2a322d494b44fab4670012e3756d23
	  System UUID:                dc2a322d-494b-44fa-b467-0012e3756d23
	  Boot ID:                    eeb7a641-9a9a-4ebc-b450-0c018353ef78
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-kt2h2                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     104s
	  kube-system                 etcd-test-preload-193194                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         111s
	  kube-system                 kube-apiserver-test-preload-193194             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-test-preload-193194    200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-b4x45                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-test-preload-193194             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 103s               kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   NodeHasSufficientMemory  109s               kubelet          Node test-preload-193194 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  109s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    109s               kubelet          Node test-preload-193194 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     109s               kubelet          Node test-preload-193194 status is now: NodeHasSufficientPID
	  Normal   Starting                 109s               kubelet          Starting kubelet.
	  Normal   NodeReady                108s               kubelet          Node test-preload-193194 status is now: NodeReady
	  Normal   RegisteredNode           106s               node-controller  Node test-preload-193194 event: Registered Node test-preload-193194 in Controller
	  Normal   Starting                 22s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-193194 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-193194 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-193194 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                kubelet          Node test-preload-193194 has been rebooted, boot id: eeb7a641-9a9a-4ebc-b450-0c018353ef78
	  Normal   RegisteredNode           13s                node-controller  Node test-preload-193194 event: Registered Node test-preload-193194 in Controller
	
	
	==> dmesg <==
	[Nov22 00:54] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000068] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000366] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.955939] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.086864] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.102952] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.513244] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.023615] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [cf0a2e8d39af7cd16576968cb8bbd5bb49b12063b08250e82b95395bed899f07] <==
	{"level":"info","ts":"2025-11-22T00:54:32.280285Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-22T00:54:32.280761Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b9ca18127a3e3182","local-member-id":"d9e0442f914d2c09","added-peer-id":"d9e0442f914d2c09","added-peer-peer-urls":["https://192.168.39.70:2380"]}
	{"level":"info","ts":"2025-11-22T00:54:32.280921Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b9ca18127a3e3182","local-member-id":"d9e0442f914d2c09","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:54:32.280967Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:54:32.284524Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-22T00:54:32.284867Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"d9e0442f914d2c09","initial-advertise-peer-urls":["https://192.168.39.70:2380"],"listen-peer-urls":["https://192.168.39.70:2380"],"advertise-client-urls":["https://192.168.39.70:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.70:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-22T00:54:32.284918Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-22T00:54:32.285070Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.70:2380"}
	{"level":"info","ts":"2025-11-22T00:54:32.285122Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.70:2380"}
	{"level":"info","ts":"2025-11-22T00:54:33.953457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-22T00:54:33.953498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-22T00:54:33.953542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 received MsgPreVoteResp from d9e0442f914d2c09 at term 2"}
	{"level":"info","ts":"2025-11-22T00:54:33.953556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 became candidate at term 3"}
	{"level":"info","ts":"2025-11-22T00:54:33.953562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 received MsgVoteResp from d9e0442f914d2c09 at term 3"}
	{"level":"info","ts":"2025-11-22T00:54:33.953569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9e0442f914d2c09 became leader at term 3"}
	{"level":"info","ts":"2025-11-22T00:54:33.953579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d9e0442f914d2c09 elected leader d9e0442f914d2c09 at term 3"}
	{"level":"info","ts":"2025-11-22T00:54:33.955124Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"d9e0442f914d2c09","local-member-attributes":"{Name:test-preload-193194 ClientURLs:[https://192.168.39.70:2379]}","request-path":"/0/members/d9e0442f914d2c09/attributes","cluster-id":"b9ca18127a3e3182","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-22T00:54:33.955131Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:54:33.955353Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:54:33.955557Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-22T00:54:33.955592Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-22T00:54:33.956184Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-22T00:54:33.956194Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-22T00:54:33.956843Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-22T00:54:33.957123Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.70:2379"}
	
	
	==> kernel <==
	 00:54:51 up 0 min,  0 users,  load average: 1.14, 0.32, 0.11
	Linux test-preload-193194 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0f1b6adc607873032eec3570ee742a156166bc5ceecaacd798db72e353630daa] <==
	I1122 00:54:35.183483       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1122 00:54:35.183570       1 policy_source.go:240] refreshing policies
	I1122 00:54:35.184051       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1122 00:54:35.184094       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:54:35.194399       1 shared_informer.go:320] Caches are synced for configmaps
	I1122 00:54:35.194633       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 00:54:35.195085       1 aggregator.go:171] initial CRD sync complete...
	I1122 00:54:35.195114       1 autoregister_controller.go:144] Starting autoregister controller
	I1122 00:54:35.195120       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:54:35.195125       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:54:35.219119       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1122 00:54:35.226944       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1122 00:54:35.237134       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 00:54:35.237245       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:54:35.239948       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:54:35.261819       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:54:35.752556       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1122 00:54:36.038147       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:54:36.966127       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1122 00:54:37.010911       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1122 00:54:37.047245       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:54:37.055824       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:54:38.355840       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1122 00:54:38.697986       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:54:38.747289       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [cf5795fcb7a3f02e51ab04f1a52df1d769c77b95f394871af2e6dcf1e23831c4] <==
	I1122 00:54:38.364784       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="190.167µs"
	I1122 00:54:38.366348       1 shared_informer.go:320] Caches are synced for taint
	I1122 00:54:38.366519       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:54:38.366613       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-193194"
	I1122 00:54:38.366745       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:54:38.368580       1 shared_informer.go:320] Caches are synced for resource quota
	I1122 00:54:38.375045       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1122 00:54:38.380905       1 shared_informer.go:320] Caches are synced for GC
	I1122 00:54:38.382926       1 shared_informer.go:320] Caches are synced for resource quota
	I1122 00:54:38.389633       1 shared_informer.go:320] Caches are synced for garbage collector
	I1122 00:54:38.391928       1 shared_informer.go:320] Caches are synced for service account
	I1122 00:54:38.395268       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1122 00:54:38.395933       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1122 00:54:38.396177       1 shared_informer.go:320] Caches are synced for crt configmap
	I1122 00:54:38.396745       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1122 00:54:38.398412       1 shared_informer.go:320] Caches are synced for persistent volume
	I1122 00:54:38.400123       1 shared_informer.go:320] Caches are synced for endpoint
	I1122 00:54:38.402560       1 shared_informer.go:320] Caches are synced for stateful set
	I1122 00:54:38.403749       1 shared_informer.go:320] Caches are synced for daemon sets
	I1122 00:54:38.404931       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1122 00:54:38.408314       1 shared_informer.go:320] Caches are synced for attach detach
	I1122 00:54:38.408659       1 shared_informer.go:320] Caches are synced for PV protection
	I1122 00:54:40.873262       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="42.865µs"
	I1122 00:54:48.106068       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="14.9407ms"
	I1122 00:54:48.106189       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="48.918µs"
	
	
	==> kube-proxy [918d5eead5bc5b24f3b51b9d316e46127ef1a262ebe67cc0d631f7891131cf73] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1122 00:54:36.520286       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1122 00:54:36.530156       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.70"]
	E1122 00:54:36.530271       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:54:36.579373       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1122 00:54:36.579502       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1122 00:54:36.579538       1 server_linux.go:170] "Using iptables Proxier"
	I1122 00:54:36.582877       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:54:36.583191       1 server.go:497] "Version info" version="v1.32.0"
	I1122 00:54:36.583222       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:54:36.584869       1 config.go:199] "Starting service config controller"
	I1122 00:54:36.584898       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1122 00:54:36.584932       1 config.go:105] "Starting endpoint slice config controller"
	I1122 00:54:36.584936       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1122 00:54:36.585424       1 config.go:329] "Starting node config controller"
	I1122 00:54:36.585454       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1122 00:54:36.685122       1 shared_informer.go:320] Caches are synced for service config
	I1122 00:54:36.685125       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1122 00:54:36.685483       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [23228b3b40501de06865ce9bf3315ea6d586811e65e4465d27ced85243c96e7e] <==
	I1122 00:54:32.879256       1 serving.go:386] Generated self-signed cert in-memory
	W1122 00:54:35.092223       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 00:54:35.092263       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:54:35.092274       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 00:54:35.092284       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 00:54:35.153487       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1122 00:54:35.155746       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:54:35.162150       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:54:35.162357       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1122 00:54:35.162457       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:54:35.162792       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1122 00:54:35.263543       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 22 00:54:35 test-preload-193194 kubelet[1183]: I1122 00:54:35.256114    1183 setters.go:602] "Node became not ready" node="test-preload-193194" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:54:35Z","lastTransitionTime":"2025-11-22T00:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Nov 22 00:54:35 test-preload-193194 kubelet[1183]: E1122 00:54:35.269260    1183 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-193194\" already exists" pod="kube-system/etcd-test-preload-193194"
	Nov 22 00:54:35 test-preload-193194 kubelet[1183]: I1122 00:54:35.269401    1183 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-193194"
	Nov 22 00:54:35 test-preload-193194 kubelet[1183]: E1122 00:54:35.297144    1183 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-193194\" already exists" pod="kube-system/kube-apiserver-test-preload-193194"
	Nov 22 00:54:35 test-preload-193194 kubelet[1183]: I1122 00:54:35.297466    1183 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-193194"
	Nov 22 00:54:35 test-preload-193194 kubelet[1183]: E1122 00:54:35.315494    1183 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-193194\" already exists" pod="kube-system/kube-controller-manager-test-preload-193194"
	Nov 22 00:54:35 test-preload-193194 kubelet[1183]: I1122 00:54:35.649404    1183 apiserver.go:52] "Watching apiserver"
	Nov 22 00:54:35 test-preload-193194 kubelet[1183]: E1122 00:54:35.655614    1183 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-kt2h2" podUID="929d3254-ff83-4e09-b17d-8de8212c85c8"
	Nov 22 00:54:35 test-preload-193194 kubelet[1183]: I1122 00:54:35.672638    1183 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Nov 22 00:54:35 test-preload-193194 kubelet[1183]: I1122 00:54:35.746667    1183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d6b3685-bb30-4389-b236-db63a4a31dbc-lib-modules\") pod \"kube-proxy-b4x45\" (UID: \"4d6b3685-bb30-4389-b236-db63a4a31dbc\") " pod="kube-system/kube-proxy-b4x45"
	Nov 22 00:54:35 test-preload-193194 kubelet[1183]: I1122 00:54:35.747057    1183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f19b2824-1993-4859-b367-9dd74d4a1a0e-tmp\") pod \"storage-provisioner\" (UID: \"f19b2824-1993-4859-b367-9dd74d4a1a0e\") " pod="kube-system/storage-provisioner"
	Nov 22 00:54:35 test-preload-193194 kubelet[1183]: E1122 00:54:35.746824    1183 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 22 00:54:35 test-preload-193194 kubelet[1183]: E1122 00:54:35.747351    1183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/929d3254-ff83-4e09-b17d-8de8212c85c8-config-volume podName:929d3254-ff83-4e09-b17d-8de8212c85c8 nodeName:}" failed. No retries permitted until 2025-11-22 00:54:36.247318297 +0000 UTC m=+6.713892758 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/929d3254-ff83-4e09-b17d-8de8212c85c8-config-volume") pod "coredns-668d6bf9bc-kt2h2" (UID: "929d3254-ff83-4e09-b17d-8de8212c85c8") : object "kube-system"/"coredns" not registered
	Nov 22 00:54:35 test-preload-193194 kubelet[1183]: I1122 00:54:35.748066    1183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d6b3685-bb30-4389-b236-db63a4a31dbc-xtables-lock\") pod \"kube-proxy-b4x45\" (UID: \"4d6b3685-bb30-4389-b236-db63a4a31dbc\") " pod="kube-system/kube-proxy-b4x45"
	Nov 22 00:54:36 test-preload-193194 kubelet[1183]: E1122 00:54:36.251080    1183 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 22 00:54:36 test-preload-193194 kubelet[1183]: E1122 00:54:36.251854    1183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/929d3254-ff83-4e09-b17d-8de8212c85c8-config-volume podName:929d3254-ff83-4e09-b17d-8de8212c85c8 nodeName:}" failed. No retries permitted until 2025-11-22 00:54:37.251833502 +0000 UTC m=+7.718407967 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/929d3254-ff83-4e09-b17d-8de8212c85c8-config-volume") pod "coredns-668d6bf9bc-kt2h2" (UID: "929d3254-ff83-4e09-b17d-8de8212c85c8") : object "kube-system"/"coredns" not registered
	Nov 22 00:54:37 test-preload-193194 kubelet[1183]: E1122 00:54:37.259779    1183 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 22 00:54:37 test-preload-193194 kubelet[1183]: E1122 00:54:37.259838    1183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/929d3254-ff83-4e09-b17d-8de8212c85c8-config-volume podName:929d3254-ff83-4e09-b17d-8de8212c85c8 nodeName:}" failed. No retries permitted until 2025-11-22 00:54:39.259823035 +0000 UTC m=+9.726397497 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/929d3254-ff83-4e09-b17d-8de8212c85c8-config-volume") pod "coredns-668d6bf9bc-kt2h2" (UID: "929d3254-ff83-4e09-b17d-8de8212c85c8") : object "kube-system"/"coredns" not registered
	Nov 22 00:54:37 test-preload-193194 kubelet[1183]: I1122 00:54:37.398012    1183 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Nov 22 00:54:39 test-preload-193194 kubelet[1183]: E1122 00:54:39.748966    1183 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763772879746587957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 22 00:54:39 test-preload-193194 kubelet[1183]: E1122 00:54:39.749306    1183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763772879746587957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 22 00:54:41 test-preload-193194 kubelet[1183]: I1122 00:54:41.859594    1183 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 22 00:54:48 test-preload-193194 kubelet[1183]: I1122 00:54:48.075207    1183 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 22 00:54:49 test-preload-193194 kubelet[1183]: E1122 00:54:49.752342    1183 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763772889752029989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 22 00:54:49 test-preload-193194 kubelet[1183]: E1122 00:54:49.752474    1183 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763772889752029989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [4bb29b79fe99cbbdf4fac0f45db8e7861317c13290014de70d605e77e0e42544] <==
	I1122 00:54:36.382502       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-193194 -n test-preload-193194
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-193194 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-193194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-193194
--- FAIL: TestPreload (161.62s)
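For reference, a minimal Go sketch that checks whether the "coredns" ConfigMap referenced in the kubelet errors above ("object \"kube-system\"/\"coredns\" not registered") actually exists in the cluster. This is an illustrative stand-alone helper, not part of the minikube test suite; only the profile/context name is taken from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// checkCoreDNSConfigMap shells out to kubectl and reports whether the
	// kube-system/coredns ConfigMap is present for the given context.
	func checkCoreDNSConfigMap(context string) error {
		cmd := exec.Command("kubectl", "--context", context,
			"-n", "kube-system", "get", "configmap", "coredns")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("coredns ConfigMap not found: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		// Context name copied from the failing TestPreload run above.
		if err := checkCoreDNSConfigMap("test-preload-193194"); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-system/coredns ConfigMap is present")
	}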

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (95.17s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-061914 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-061914 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m30.286154022s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-061914] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-061914" primary control-plane node in "pause-061914" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-061914" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 01:01:10.659688  286531 out.go:360] Setting OutFile to fd 1 ...
	I1122 01:01:10.659989  286531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 01:01:10.660000  286531 out.go:374] Setting ErrFile to fd 2...
	I1122 01:01:10.660004  286531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 01:01:10.660232  286531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 01:01:10.660739  286531 out.go:368] Setting JSON to false
	I1122 01:01:10.661634  286531 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":31399,"bootTime":1763741872,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 01:01:10.661731  286531 start.go:143] virtualization: kvm guest
	I1122 01:01:10.663599  286531 out.go:179] * [pause-061914] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 01:01:10.664788  286531 notify.go:221] Checking for updates...
	I1122 01:01:10.664817  286531 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 01:01:10.666010  286531 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 01:01:10.667903  286531 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1122 01:01:10.669233  286531 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	I1122 01:01:10.670627  286531 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 01:01:10.672135  286531 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 01:01:10.673787  286531 config.go:182] Loaded profile config "pause-061914": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 01:01:10.674315  286531 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 01:01:10.716281  286531 out.go:179] * Using the kvm2 driver based on existing profile
	I1122 01:01:10.718006  286531 start.go:309] selected driver: kvm2
	I1122 01:01:10.718031  286531 start.go:930] validating driver "kvm2" against &{Name:pause-061914 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.1 ClusterName:pause-061914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 01:01:10.718235  286531 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 01:01:10.719784  286531 cni.go:84] Creating CNI manager for ""
	I1122 01:01:10.719888  286531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1122 01:01:10.719981  286531 start.go:353] cluster config:
	{Name:pause-061914 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-061914 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 01:01:10.720165  286531 iso.go:125] acquiring lock: {Name:mkc83d3435f1eaa5a92358fc78f85b7d74048deb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 01:01:10.722691  286531 out.go:179] * Starting "pause-061914" primary control-plane node in "pause-061914" cluster
	I1122 01:01:10.724053  286531 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 01:01:10.724115  286531 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 01:01:10.724132  286531 cache.go:65] Caching tarball of preloaded images
	I1122 01:01:10.724218  286531 preload.go:238] Found /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 01:01:10.724231  286531 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 01:01:10.724353  286531 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914/config.json ...
	I1122 01:01:10.724638  286531 start.go:360] acquireMachinesLock for pause-061914: {Name:mk0193f6f5636a08cc7939c91649c5870e7698fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1122 01:01:52.777241  286531 start.go:364] duration metric: took 42.052532901s to acquireMachinesLock for "pause-061914"
	I1122 01:01:52.777298  286531 start.go:96] Skipping create...Using existing machine configuration
	I1122 01:01:52.777311  286531 fix.go:54] fixHost starting: 
	I1122 01:01:52.780115  286531 fix.go:112] recreateIfNeeded on pause-061914: state=Running err=<nil>
	W1122 01:01:52.780164  286531 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 01:01:52.781922  286531 out.go:252] * Updating the running kvm2 "pause-061914" VM ...
	I1122 01:01:52.781971  286531 machine.go:94] provisionDockerMachine start ...
	I1122 01:01:52.786583  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:52.787148  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:52.787177  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:52.787502  286531 main.go:143] libmachine: Using SSH client type: native
	I1122 01:01:52.787920  286531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I1122 01:01:52.787937  286531 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 01:01:52.912193  286531 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-061914
	
	I1122 01:01:52.912238  286531 buildroot.go:166] provisioning hostname "pause-061914"
	I1122 01:01:52.917366  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:52.920704  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:52.920771  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:52.921137  286531 main.go:143] libmachine: Using SSH client type: native
	I1122 01:01:52.921471  286531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I1122 01:01:52.921496  286531 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-061914 && echo "pause-061914" | sudo tee /etc/hostname
	I1122 01:01:53.062289  286531 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-061914
	
	I1122 01:01:53.066404  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.066952  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:53.066986  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.067208  286531 main.go:143] libmachine: Using SSH client type: native
	I1122 01:01:53.067465  286531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I1122 01:01:53.067485  286531 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-061914' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-061914/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-061914' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 01:01:53.191962  286531 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 01:01:53.192001  286531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21934-244751/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-244751/.minikube}
	I1122 01:01:53.192060  286531 buildroot.go:174] setting up certificates
	I1122 01:01:53.192082  286531 provision.go:84] configureAuth start
	I1122 01:01:53.195571  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.196240  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:53.196281  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.199077  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.199628  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:53.199762  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.199983  286531 provision.go:143] copyHostCerts
	I1122 01:01:53.200044  286531 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-244751/.minikube/ca.pem, removing ...
	I1122 01:01:53.200070  286531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-244751/.minikube/ca.pem
	I1122 01:01:53.200159  286531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/ca.pem (1078 bytes)
	I1122 01:01:53.200303  286531 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-244751/.minikube/cert.pem, removing ...
	I1122 01:01:53.200317  286531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-244751/.minikube/cert.pem
	I1122 01:01:53.200353  286531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/cert.pem (1123 bytes)
	I1122 01:01:53.200448  286531 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-244751/.minikube/key.pem, removing ...
	I1122 01:01:53.200462  286531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-244751/.minikube/key.pem
	I1122 01:01:53.200498  286531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/key.pem (1679 bytes)
	I1122 01:01:53.200582  286531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem org=jenkins.pause-061914 san=[127.0.0.1 192.168.50.109 localhost minikube pause-061914]
	I1122 01:01:53.234942  286531 provision.go:177] copyRemoteCerts
	I1122 01:01:53.235020  286531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 01:01:53.237959  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.238440  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:53.238481  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.238671  286531 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/pause-061914/id_rsa Username:docker}
	I1122 01:01:53.333085  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 01:01:53.371797  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1122 01:01:53.409349  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 01:01:53.454653  286531 provision.go:87] duration metric: took 262.548258ms to configureAuth
	I1122 01:01:53.454713  286531 buildroot.go:189] setting minikube options for container-runtime
	I1122 01:01:53.455009  286531 config.go:182] Loaded profile config "pause-061914": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 01:01:53.459254  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.459959  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:53.459994  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.460246  286531 main.go:143] libmachine: Using SSH client type: native
	I1122 01:01:53.460443  286531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I1122 01:01:53.460456  286531 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 01:01:59.055775  286531 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 01:01:59.055803  286531 machine.go:97] duration metric: took 6.273820847s to provisionDockerMachine
	I1122 01:01:59.055816  286531 start.go:293] postStartSetup for "pause-061914" (driver="kvm2")
	I1122 01:01:59.055827  286531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 01:01:59.055899  286531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 01:01:59.058868  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.059309  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:59.059331  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.059479  286531 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/pause-061914/id_rsa Username:docker}
	I1122 01:01:59.144307  286531 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 01:01:59.149833  286531 info.go:137] Remote host: Buildroot 2025.02
	I1122 01:01:59.149863  286531 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-244751/.minikube/addons for local assets ...
	I1122 01:01:59.149931  286531 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-244751/.minikube/files for local assets ...
	I1122 01:01:59.150022  286531 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-244751/.minikube/files/etc/ssl/certs/2506642.pem -> 2506642.pem in /etc/ssl/certs
	I1122 01:01:59.150114  286531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 01:01:59.162966  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/files/etc/ssl/certs/2506642.pem --> /etc/ssl/certs/2506642.pem (1708 bytes)
	I1122 01:01:59.197890  286531 start.go:296] duration metric: took 142.054363ms for postStartSetup
	I1122 01:01:59.197939  286531 fix.go:56] duration metric: took 6.420633666s for fixHost
	I1122 01:01:59.201542  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.202036  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:59.202065  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.202287  286531 main.go:143] libmachine: Using SSH client type: native
	I1122 01:01:59.202539  286531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I1122 01:01:59.202554  286531 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1122 01:01:59.307794  286531 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763773319.301377360
	
	I1122 01:01:59.307820  286531 fix.go:216] guest clock: 1763773319.301377360
	I1122 01:01:59.307829  286531 fix.go:229] Guest: 2025-11-22 01:01:59.30137736 +0000 UTC Remote: 2025-11-22 01:01:59.197944177 +0000 UTC m=+48.595022327 (delta=103.433183ms)
	I1122 01:01:59.307846  286531 fix.go:200] guest clock delta is within tolerance: 103.433183ms
	I1122 01:01:59.307851  286531 start.go:83] releasing machines lock for "pause-061914", held for 6.530577312s
	I1122 01:01:59.311365  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.311883  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:59.311917  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.312497  286531 ssh_runner.go:195] Run: cat /version.json
	I1122 01:01:59.312552  286531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 01:01:59.316222  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.316248  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.316715  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:59.316738  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:59.316754  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.316760  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.316972  286531 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/pause-061914/id_rsa Username:docker}
	I1122 01:01:59.317064  286531 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/pause-061914/id_rsa Username:docker}
	I1122 01:01:59.401045  286531 ssh_runner.go:195] Run: systemctl --version
	I1122 01:01:59.430612  286531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 01:01:59.588751  286531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 01:01:59.596819  286531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 01:01:59.596897  286531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 01:01:59.609613  286531 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 01:01:59.609648  286531 start.go:496] detecting cgroup driver to use...
	I1122 01:01:59.609742  286531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 01:01:59.632082  286531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 01:01:59.652774  286531 docker.go:218] disabling cri-docker service (if available) ...
	I1122 01:01:59.652834  286531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 01:01:59.676112  286531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 01:01:59.694497  286531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 01:01:59.898069  286531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 01:02:00.086041  286531 docker.go:234] disabling docker service ...
	I1122 01:02:00.086152  286531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 01:02:00.118770  286531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 01:02:00.136289  286531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 01:02:00.323541  286531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 01:02:00.512930  286531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 01:02:00.530532  286531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 01:02:00.557904  286531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 01:02:00.558019  286531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 01:02:00.576318  286531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 01:02:00.576392  286531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 01:02:00.593342  286531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 01:02:00.612331  286531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 01:02:00.628381  286531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 01:02:00.642989  286531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 01:02:00.661289  286531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 01:02:00.676957  286531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 01:02:00.690604  286531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 01:02:00.702947  286531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 01:02:00.715351  286531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 01:02:00.903201  286531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 01:02:01.933297  286531 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.030048412s)
	I1122 01:02:01.933331  286531 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 01:02:01.933392  286531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 01:02:01.939789  286531 start.go:564] Will wait 60s for crictl version
	I1122 01:02:01.939864  286531 ssh_runner.go:195] Run: which crictl
	I1122 01:02:01.945127  286531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1122 01:02:01.984303  286531 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1122 01:02:01.984422  286531 ssh_runner.go:195] Run: crio --version
	I1122 01:02:02.021312  286531 ssh_runner.go:195] Run: crio --version
	I1122 01:02:02.059721  286531 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1122 01:02:02.064554  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:02:02.065115  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:02:02.065168  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:02:02.065441  286531 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1122 01:02:02.070962  286531 kubeadm.go:884] updating cluster {Name:pause-061914 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-061914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 01:02:02.071150  286531 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 01:02:02.071217  286531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 01:02:02.118013  286531 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 01:02:02.118039  286531 crio.go:433] Images already preloaded, skipping extraction
	I1122 01:02:02.118104  286531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 01:02:02.153893  286531 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 01:02:02.153920  286531 cache_images.go:86] Images are preloaded, skipping loading
	I1122 01:02:02.153929  286531 kubeadm.go:935] updating node { 192.168.50.109 8443 v1.34.1 crio true true} ...
	I1122 01:02:02.154050  286531 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-061914 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-061914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 01:02:02.154143  286531 ssh_runner.go:195] Run: crio config
	I1122 01:02:02.217383  286531 cni.go:84] Creating CNI manager for ""
	I1122 01:02:02.217410  286531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1122 01:02:02.217434  286531 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 01:02:02.217461  286531 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.109 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-061914 NodeName:pause-061914 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 01:02:02.217639  286531 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-061914"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.109"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.109"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 01:02:02.217747  286531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 01:02:02.231942  286531 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 01:02:02.232019  286531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 01:02:02.247847  286531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1122 01:02:02.275909  286531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 01:02:02.308234  286531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1122 01:02:02.335424  286531 ssh_runner.go:195] Run: grep 192.168.50.109	control-plane.minikube.internal$ /etc/hosts
	I1122 01:02:02.340704  286531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 01:02:02.528058  286531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 01:02:02.547962  286531 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914 for IP: 192.168.50.109
	I1122 01:02:02.547992  286531 certs.go:195] generating shared ca certs ...
	I1122 01:02:02.548015  286531 certs.go:227] acquiring lock for ca certs: {Name:mk43fa762c6315605485300e07cf83f8f357f8dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 01:02:02.548215  286531 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key
	I1122 01:02:02.548267  286531 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key
	I1122 01:02:02.548284  286531 certs.go:257] generating profile certs ...
	I1122 01:02:02.548366  286531 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914/client.key
	I1122 01:02:02.548436  286531 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914/apiserver.key.872d2023
	I1122 01:02:02.548495  286531 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914/proxy-client.key
	I1122 01:02:02.548628  286531 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/250664.pem (1338 bytes)
	W1122 01:02:02.548665  286531 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-244751/.minikube/certs/250664_empty.pem, impossibly tiny 0 bytes
	I1122 01:02:02.548690  286531 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem (1675 bytes)
	I1122 01:02:02.548718  286531 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem (1078 bytes)
	I1122 01:02:02.548744  286531 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem (1123 bytes)
	I1122 01:02:02.548767  286531 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem (1679 bytes)
	I1122 01:02:02.548806  286531 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/files/etc/ssl/certs/2506642.pem (1708 bytes)
	I1122 01:02:02.549488  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 01:02:02.583257  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 01:02:02.620647  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 01:02:02.655221  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1122 01:02:02.690536  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1122 01:02:02.823705  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 01:02:02.881758  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 01:02:03.005387  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 01:02:03.081834  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/certs/250664.pem --> /usr/share/ca-certificates/250664.pem (1338 bytes)
	I1122 01:02:03.191760  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/files/etc/ssl/certs/2506642.pem --> /usr/share/ca-certificates/2506642.pem (1708 bytes)
	I1122 01:02:03.320448  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 01:02:03.368160  286531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 01:02:03.420002  286531 ssh_runner.go:195] Run: openssl version
	I1122 01:02:03.435573  286531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250664.pem && ln -fs /usr/share/ca-certificates/250664.pem /etc/ssl/certs/250664.pem"
	I1122 01:02:03.468631  286531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250664.pem
	I1122 01:02:03.479688  286531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:59 /usr/share/ca-certificates/250664.pem
	I1122 01:02:03.479761  286531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250664.pem
	I1122 01:02:03.504518  286531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250664.pem /etc/ssl/certs/51391683.0"
	I1122 01:02:03.537348  286531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2506642.pem && ln -fs /usr/share/ca-certificates/2506642.pem /etc/ssl/certs/2506642.pem"
	I1122 01:02:03.577339  286531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2506642.pem
	I1122 01:02:03.595963  286531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:59 /usr/share/ca-certificates/2506642.pem
	I1122 01:02:03.596061  286531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2506642.pem
	I1122 01:02:03.621538  286531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2506642.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 01:02:03.647189  286531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 01:02:03.676624  286531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 01:02:03.690442  286531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I1122 01:02:03.690541  286531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 01:02:03.705947  286531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 01:02:03.802498  286531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 01:02:03.819857  286531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 01:02:03.845575  286531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 01:02:03.864429  286531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 01:02:03.885738  286531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 01:02:03.902409  286531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 01:02:03.918123  286531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1122 01:02:03.942047  286531 kubeadm.go:401] StartCluster: {Name:pause-061914 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-061914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 01:02:03.942202  286531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 01:02:03.942298  286531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 01:02:04.051885  286531 cri.go:89] found id: "161be095ba10d1572ff29bf3c719de6b58e32850e4287c96579d96e406e85caf"
	I1122 01:02:04.051917  286531 cri.go:89] found id: "a64d5f577f4a5692d8ea033bff031ad1e3f4af70745d1a49998e0c78a5c8d60e"
	I1122 01:02:04.051924  286531 cri.go:89] found id: "8781303c59c3d8f47b1c02fafb2fd09f80e5a65d57020fffa3037fea8ee2336d"
	I1122 01:02:04.051929  286531 cri.go:89] found id: "2b6f8a854e8133f151047122c37b7742c38c6d4c8964523664d83b89840d9ebc"
	I1122 01:02:04.051934  286531 cri.go:89] found id: "5e45f9f3a0e7e09a7dfda889ebe3adc52b152c6320cab9b7e828a9b307f3c45d"
	I1122 01:02:04.051939  286531 cri.go:89] found id: "944822276ab7ec36d828a05439a1fc8a89fa62323892f288fcfb237a9faaad90"
	I1122 01:02:04.051943  286531 cri.go:89] found id: "630da37e2d6572c7f8b7e6960ed3a9cdea7904a6b2a15dcfe577d09a2a276a1f"
	I1122 01:02:04.051946  286531 cri.go:89] found id: "3394b89f460291950559050d76177d9e67c8e1c83848e7e35826b51d65359566"
	I1122 01:02:04.051951  286531 cri.go:89] found id: "8aebcda395085a6e4ac25cf6620bd7f12cf1fedd4f1a58f8957c5ddc9cef47a0"
	I1122 01:02:04.051962  286531 cri.go:89] found id: "5b6ac30317cf466182d6a0241294a41524c6707a11669d34ad4b42285bda37bb"
	I1122 01:02:04.051968  286531 cri.go:89] found id: ""
	I1122 01:02:04.052033  286531 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-061914 -n pause-061914
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-061914 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-061914 logs -n 25: (1.816873089s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p running-upgrade-702170 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                  │ running-upgrade-702170    │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 01:00 UTC │
	│ start   │ -p kubernetes-upgrade-450435 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-450435 │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │                     │
	│ start   │ -p kubernetes-upgrade-450435 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                  │ kubernetes-upgrade-450435 │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 01:00 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-504824 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ stopped-upgrade-504824    │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │                     │
	│ delete  │ -p stopped-upgrade-504824                                                                                                                                                                                               │ stopped-upgrade-504824    │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ start   │ -p pause-061914 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-061914              │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 01:01 UTC │
	│ ssh     │ -p NoKubernetes-061445 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-061445       │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │                     │
	│ stop    │ -p NoKubernetes-061445                                                                                                                                                                                                  │ NoKubernetes-061445       │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 01:00 UTC │
	│ start   │ -p NoKubernetes-061445 --driver=kvm2  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-061445       │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │ 22 Nov 25 01:00 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-702170 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ running-upgrade-702170    │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │                     │
	│ delete  │ -p running-upgrade-702170                                                                                                                                                                                               │ running-upgrade-702170    │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │ 22 Nov 25 01:00 UTC │
	│ start   │ -p cert-expiration-302431 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-302431    │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │ 22 Nov 25 01:01 UTC │
	│ delete  │ -p kubernetes-upgrade-450435                                                                                                                                                                                            │ kubernetes-upgrade-450435 │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │ 22 Nov 25 01:00 UTC │
	│ start   │ -p force-systemd-flag-555638 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                               │ force-systemd-flag-555638 │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │ 22 Nov 25 01:01 UTC │
	│ ssh     │ -p NoKubernetes-061445 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-061445       │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │                     │
	│ delete  │ -p NoKubernetes-061445                                                                                                                                                                                                  │ NoKubernetes-061445       │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │ 22 Nov 25 01:00 UTC │
	│ start   │ -p guest-688997 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                 │ guest-688997              │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │ 22 Nov 25 01:01 UTC │
	│ start   │ -p pause-061914 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-061914              │ jenkins │ v1.37.0 │ 22 Nov 25 01:01 UTC │ 22 Nov 25 01:02 UTC │
	│ ssh     │ force-systemd-flag-555638 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                    │ force-systemd-flag-555638 │ jenkins │ v1.37.0 │ 22 Nov 25 01:01 UTC │ 22 Nov 25 01:01 UTC │
	│ delete  │ -p force-systemd-flag-555638                                                                                                                                                                                            │ force-systemd-flag-555638 │ jenkins │ v1.37.0 │ 22 Nov 25 01:01 UTC │ 22 Nov 25 01:01 UTC │
	│ start   │ -p cert-options-078413 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-078413       │ jenkins │ v1.37.0 │ 22 Nov 25 01:01 UTC │ 22 Nov 25 01:02 UTC │
	│ start   │ -p auto-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                                                                                   │ auto-842088               │ jenkins │ v1.37.0 │ 22 Nov 25 01:01 UTC │                     │
	│ ssh     │ cert-options-078413 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-078413       │ jenkins │ v1.37.0 │ 22 Nov 25 01:02 UTC │ 22 Nov 25 01:02 UTC │
	│ ssh     │ -p cert-options-078413 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-078413       │ jenkins │ v1.37.0 │ 22 Nov 25 01:02 UTC │ 22 Nov 25 01:02 UTC │
	│ delete  │ -p cert-options-078413                                                                                                                                                                                                  │ cert-options-078413       │ jenkins │ v1.37.0 │ 22 Nov 25 01:02 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 01:01:54
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 01:01:54.640622  287016 out.go:360] Setting OutFile to fd 1 ...
	I1122 01:01:54.640903  287016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 01:01:54.640913  287016 out.go:374] Setting ErrFile to fd 2...
	I1122 01:01:54.640917  287016 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 01:01:54.641125  287016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 01:01:54.641611  287016 out.go:368] Setting JSON to false
	I1122 01:01:54.642465  287016 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":31443,"bootTime":1763741872,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 01:01:54.642536  287016 start.go:143] virtualization: kvm guest
	I1122 01:01:54.644522  287016 out.go:179] * [auto-842088] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 01:01:54.646040  287016 notify.go:221] Checking for updates...
	I1122 01:01:54.646059  287016 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 01:01:54.647456  287016 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 01:01:54.648826  287016 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1122 01:01:54.650430  287016 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	I1122 01:01:54.651844  287016 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 01:01:54.653239  287016 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 01:01:54.654952  287016 config.go:182] Loaded profile config "cert-expiration-302431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 01:01:54.655091  287016 config.go:182] Loaded profile config "cert-options-078413": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 01:01:54.655214  287016 config.go:182] Loaded profile config "guest-688997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1122 01:01:54.655390  287016 config.go:182] Loaded profile config "pause-061914": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 01:01:54.655518  287016 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 01:01:54.689768  287016 out.go:179] * Using the kvm2 driver based on user configuration
	I1122 01:01:54.691337  287016 start.go:309] selected driver: kvm2
	I1122 01:01:54.691360  287016 start.go:930] validating driver "kvm2" against <nil>
	I1122 01:01:54.691376  287016 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 01:01:54.692200  287016 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 01:01:54.692508  287016 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 01:01:54.692536  287016 cni.go:84] Creating CNI manager for ""
	I1122 01:01:54.692598  287016 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1122 01:01:54.692611  287016 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1122 01:01:54.692669  287016 start.go:353] cluster config:
	{Name:auto-842088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-842088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Auto
PauseInterval:1m0s}
	I1122 01:01:54.692828  287016 iso.go:125] acquiring lock: {Name:mkc83d3435f1eaa5a92358fc78f85b7d74048deb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 01:01:54.694781  287016 out.go:179] * Starting "auto-842088" primary control-plane node in "auto-842088" cluster
	I1122 01:01:52.781922  286531 out.go:252] * Updating the running kvm2 "pause-061914" VM ...
	I1122 01:01:52.781971  286531 machine.go:94] provisionDockerMachine start ...
	I1122 01:01:52.786583  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:52.787148  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:52.787177  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:52.787502  286531 main.go:143] libmachine: Using SSH client type: native
	I1122 01:01:52.787920  286531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I1122 01:01:52.787937  286531 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 01:01:52.912193  286531 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-061914
	
	I1122 01:01:52.912238  286531 buildroot.go:166] provisioning hostname "pause-061914"
	I1122 01:01:52.917366  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:52.920704  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:52.920771  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:52.921137  286531 main.go:143] libmachine: Using SSH client type: native
	I1122 01:01:52.921471  286531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I1122 01:01:52.921496  286531 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-061914 && echo "pause-061914" | sudo tee /etc/hostname
	I1122 01:01:53.062289  286531 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-061914
	
	I1122 01:01:53.066404  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.066952  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:53.066986  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.067208  286531 main.go:143] libmachine: Using SSH client type: native
	I1122 01:01:53.067465  286531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I1122 01:01:53.067485  286531 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-061914' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-061914/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-061914' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 01:01:53.191962  286531 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 01:01:53.192001  286531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21934-244751/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-244751/.minikube}
	I1122 01:01:53.192060  286531 buildroot.go:174] setting up certificates
	I1122 01:01:53.192082  286531 provision.go:84] configureAuth start
	I1122 01:01:53.195571  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.196240  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:53.196281  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.199077  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.199628  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:53.199762  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.199983  286531 provision.go:143] copyHostCerts
	I1122 01:01:53.200044  286531 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-244751/.minikube/ca.pem, removing ...
	I1122 01:01:53.200070  286531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-244751/.minikube/ca.pem
	I1122 01:01:53.200159  286531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/ca.pem (1078 bytes)
	I1122 01:01:53.200303  286531 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-244751/.minikube/cert.pem, removing ...
	I1122 01:01:53.200317  286531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-244751/.minikube/cert.pem
	I1122 01:01:53.200353  286531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/cert.pem (1123 bytes)
	I1122 01:01:53.200448  286531 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-244751/.minikube/key.pem, removing ...
	I1122 01:01:53.200462  286531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-244751/.minikube/key.pem
	I1122 01:01:53.200498  286531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-244751/.minikube/key.pem (1679 bytes)
	I1122 01:01:53.200582  286531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem org=jenkins.pause-061914 san=[127.0.0.1 192.168.50.109 localhost minikube pause-061914]
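For context: the provision step above issues a server certificate whose SANs cover the loopback address, the node IP, and the host names used to reach the machine. Below is a rough, self-signed sketch of producing such a certificate with Go's crypto/x509; minikube signs against its own CA instead, and the names, IPs, and lifetime are copied from the log purely as example values.

	// Sketch only: self-signed server cert with the SANs listed in the log.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.pause-061914"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // example lifetime, from CertExpiration above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "pause-061914"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.109")},
		}
		// Self-signed for brevity: the template doubles as its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}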
	I1122 01:01:53.234942  286531 provision.go:177] copyRemoteCerts
	I1122 01:01:53.235020  286531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 01:01:53.237959  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.238440  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:53.238481  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.238671  286531 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/pause-061914/id_rsa Username:docker}
	I1122 01:01:53.333085  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 01:01:53.371797  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1122 01:01:53.409349  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 01:01:53.454653  286531 provision.go:87] duration metric: took 262.548258ms to configureAuth
	I1122 01:01:53.454713  286531 buildroot.go:189] setting minikube options for container-runtime
	I1122 01:01:53.455009  286531 config.go:182] Loaded profile config "pause-061914": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 01:01:53.459254  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.459959  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:53.459994  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:53.460246  286531 main.go:143] libmachine: Using SSH client type: native
	I1122 01:01:53.460443  286531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I1122 01:01:53.460456  286531 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 01:01:53.833810  287000 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 01:01:53.833838  287000 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 01:01:53.833845  287000 cache.go:65] Caching tarball of preloaded images
	I1122 01:01:53.833963  287000 preload.go:238] Found /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 01:01:53.833969  287000 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 01:01:53.834076  287000 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/cert-options-078413/config.json ...
	I1122 01:01:53.834093  287000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/cert-options-078413/config.json: {Name:mkad777aa92dabb80f11b58fdff37bfafebd00c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 01:01:53.834232  287000 start.go:360] acquireMachinesLock for cert-options-078413: {Name:mk0193f6f5636a08cc7939c91649c5870e7698fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1122 01:01:59.307942  287000 start.go:364] duration metric: took 5.473672389s to acquireMachinesLock for "cert-options-078413"
	I1122 01:01:59.308000  287000 start.go:93] Provisioning new machine with config: &{Name:cert-options-078413 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.34.1 ClusterName:cert-options-078413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 01:01:59.308116  287000 start.go:125] createHost starting for "" (driver="kvm2")
	I1122 01:01:54.695929  287016 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 01:01:54.695966  287016 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 01:01:54.695982  287016 cache.go:65] Caching tarball of preloaded images
	I1122 01:01:54.696192  287016 preload.go:238] Found /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 01:01:54.696209  287016 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 01:01:54.696340  287016 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/config.json ...
	I1122 01:01:54.696367  287016 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/config.json: {Name:mka88bb470fa9d6ca1924c853e5501ec2117b1a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 01:01:54.696548  287016 start.go:360] acquireMachinesLock for auto-842088: {Name:mk0193f6f5636a08cc7939c91649c5870e7698fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1122 01:01:59.055775  286531 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 01:01:59.055803  286531 machine.go:97] duration metric: took 6.273820847s to provisionDockerMachine
	I1122 01:01:59.055816  286531 start.go:293] postStartSetup for "pause-061914" (driver="kvm2")
	I1122 01:01:59.055827  286531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 01:01:59.055899  286531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 01:01:59.058868  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.059309  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:59.059331  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.059479  286531 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/pause-061914/id_rsa Username:docker}
	I1122 01:01:59.144307  286531 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 01:01:59.149833  286531 info.go:137] Remote host: Buildroot 2025.02
	I1122 01:01:59.149863  286531 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-244751/.minikube/addons for local assets ...
	I1122 01:01:59.149931  286531 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-244751/.minikube/files for local assets ...
	I1122 01:01:59.150022  286531 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-244751/.minikube/files/etc/ssl/certs/2506642.pem -> 2506642.pem in /etc/ssl/certs
	I1122 01:01:59.150114  286531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 01:01:59.162966  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/files/etc/ssl/certs/2506642.pem --> /etc/ssl/certs/2506642.pem (1708 bytes)
	I1122 01:01:59.197890  286531 start.go:296] duration metric: took 142.054363ms for postStartSetup
	I1122 01:01:59.197939  286531 fix.go:56] duration metric: took 6.420633666s for fixHost
	I1122 01:01:59.201542  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.202036  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:59.202065  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.202287  286531 main.go:143] libmachine: Using SSH client type: native
	I1122 01:01:59.202539  286531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.109 22 <nil> <nil>}
	I1122 01:01:59.202554  286531 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1122 01:01:59.307794  286531 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763773319.301377360
	
	I1122 01:01:59.307820  286531 fix.go:216] guest clock: 1763773319.301377360
	I1122 01:01:59.307829  286531 fix.go:229] Guest: 2025-11-22 01:01:59.30137736 +0000 UTC Remote: 2025-11-22 01:01:59.197944177 +0000 UTC m=+48.595022327 (delta=103.433183ms)
	I1122 01:01:59.307846  286531 fix.go:200] guest clock delta is within tolerance: 103.433183ms
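For context: the guest clock check above runs `date +%s.%N` in the VM and accepts the result when the drift against the host clock is small, so no forced time sync is needed. A minimal sketch of that comparison follows; the 2s tolerance is an assumed example, not minikube's actual threshold.

	// Sketch only: parse the guest's `date +%s.%N` output and compute drift
	// against the host clock.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	func clockDelta(guestDateOutput string, hostNow time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestDateOutput, 64)
		if err != nil {
			return 0, err
		}
		// float64 parsing loses sub-microsecond precision; fine for a drift check.
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(hostNow), nil
	}

	func main() {
		delta, err := clockDelta("1763773319.301377360", time.Now())
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed example threshold
		fmt.Printf("guest clock delta %v (within tolerance: %v)\n",
			delta, math.Abs(float64(delta)) <= float64(tolerance))
	}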
	I1122 01:01:59.307851  286531 start.go:83] releasing machines lock for "pause-061914", held for 6.530577312s
	I1122 01:01:59.311365  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.311883  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:59.311917  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.312497  286531 ssh_runner.go:195] Run: cat /version.json
	I1122 01:01:59.312552  286531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 01:01:59.316222  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.316248  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.316715  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:59.316738  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:01:59.316754  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.316760  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:01:59.316972  286531 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/pause-061914/id_rsa Username:docker}
	I1122 01:01:59.317064  286531 sshutil.go:53] new ssh client: &{IP:192.168.50.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/pause-061914/id_rsa Username:docker}
	I1122 01:01:59.401045  286531 ssh_runner.go:195] Run: systemctl --version
	I1122 01:01:59.430612  286531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 01:01:59.588751  286531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 01:01:59.596819  286531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 01:01:59.596897  286531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 01:01:59.609613  286531 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 01:01:59.609648  286531 start.go:496] detecting cgroup driver to use...
	I1122 01:01:59.609742  286531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 01:01:59.632082  286531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 01:01:59.652774  286531 docker.go:218] disabling cri-docker service (if available) ...
	I1122 01:01:59.652834  286531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 01:01:59.676112  286531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 01:01:59.694497  286531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 01:01:59.898069  286531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 01:02:00.086041  286531 docker.go:234] disabling docker service ...
	I1122 01:02:00.086152  286531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 01:02:00.118770  286531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 01:02:00.136289  286531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 01:02:00.323541  286531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 01:02:00.512930  286531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 01:02:00.530532  286531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 01:02:00.557904  286531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 01:02:00.558019  286531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 01:02:00.576318  286531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 01:02:00.576392  286531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 01:02:00.593342  286531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 01:02:00.612331  286531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 01:02:00.628381  286531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 01:02:00.642989  286531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 01:02:00.661289  286531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 01:02:00.676957  286531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 01:02:00.690604  286531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 01:02:00.702947  286531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 01:02:00.715351  286531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 01:02:00.903201  286531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 01:02:01.933297  286531 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.030048412s)
	I1122 01:02:01.933331  286531 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 01:02:01.933392  286531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 01:02:01.939789  286531 start.go:564] Will wait 60s for crictl version
	I1122 01:02:01.939864  286531 ssh_runner.go:195] Run: which crictl
	I1122 01:02:01.945127  286531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1122 01:02:01.984303  286531 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1122 01:02:01.984422  286531 ssh_runner.go:195] Run: crio --version
	I1122 01:02:02.021312  286531 ssh_runner.go:195] Run: crio --version
	I1122 01:02:02.059721  286531 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1122 01:01:59.310061  287000 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1122 01:01:59.310297  287000 start.go:159] libmachine.API.Create for "cert-options-078413" (driver="kvm2")
	I1122 01:01:59.310326  287000 client.go:173] LocalClient.Create starting
	I1122 01:01:59.310399  287000 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem
	I1122 01:01:59.310429  287000 main.go:143] libmachine: Decoding PEM data...
	I1122 01:01:59.310444  287000 main.go:143] libmachine: Parsing certificate...
	I1122 01:01:59.310510  287000 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem
	I1122 01:01:59.310531  287000 main.go:143] libmachine: Decoding PEM data...
	I1122 01:01:59.310540  287000 main.go:143] libmachine: Parsing certificate...
	I1122 01:01:59.310882  287000 main.go:143] libmachine: creating domain...
	I1122 01:01:59.310897  287000 main.go:143] libmachine: creating network...
	I1122 01:01:59.312514  287000 main.go:143] libmachine: found existing default network
	I1122 01:01:59.312802  287000 main.go:143] libmachine: <network connections='3'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1122 01:01:59.314321  287000 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c3b800}
	I1122 01:01:59.314421  287000 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-cert-options-078413</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1122 01:01:59.321269  287000 main.go:143] libmachine: creating private network mk-cert-options-078413 192.168.39.0/24...
	I1122 01:01:59.405633  287000 main.go:143] libmachine: private network mk-cert-options-078413 192.168.39.0/24 created
	I1122 01:01:59.405953  287000 main.go:143] libmachine: <network>
	  <name>mk-cert-options-078413</name>
	  <uuid>d87e1c47-4db9-4bef-bce8-0638a2ed3d0d</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:ea:2a:4e'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1122 01:01:59.405980  287000 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21934-244751/.minikube/machines/cert-options-078413 ...
	I1122 01:01:59.406012  287000 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21934-244751/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1122 01:01:59.406019  287000 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21934-244751/.minikube
	I1122 01:01:59.406082  287000 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21934-244751/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21934-244751/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1122 01:01:59.683005  287000 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/cert-options-078413/id_rsa...
	I1122 01:01:59.763010  287000 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/cert-options-078413/cert-options-078413.rawdisk...
	I1122 01:01:59.763055  287000 main.go:143] libmachine: Writing magic tar header
	I1122 01:01:59.763087  287000 main.go:143] libmachine: Writing SSH key tar header
	I1122 01:01:59.763207  287000 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21934-244751/.minikube/machines/cert-options-078413 ...
	I1122 01:01:59.763307  287000 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751/.minikube/machines/cert-options-078413
	I1122 01:01:59.763337  287000 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751/.minikube/machines/cert-options-078413 (perms=drwx------)
	I1122 01:01:59.763351  287000 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751/.minikube/machines
	I1122 01:01:59.763366  287000 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751/.minikube/machines (perms=drwxr-xr-x)
	I1122 01:01:59.763380  287000 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751/.minikube
	I1122 01:01:59.763392  287000 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751/.minikube (perms=drwxr-xr-x)
	I1122 01:01:59.763406  287000 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21934-244751
	I1122 01:01:59.763419  287000 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21934-244751 (perms=drwxrwxr-x)
	I1122 01:01:59.763431  287000 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1122 01:01:59.763443  287000 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1122 01:01:59.763453  287000 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1122 01:01:59.763463  287000 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1122 01:01:59.763474  287000 main.go:143] libmachine: checking permissions on dir: /home
	I1122 01:01:59.763507  287000 main.go:143] libmachine: skipping /home - not owner
	I1122 01:01:59.763515  287000 main.go:143] libmachine: defining domain...
	I1122 01:01:59.765275  287000 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>cert-options-078413</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/cert-options-078413/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/cert-options-078413/cert-options-078413.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-cert-options-078413'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1122 01:01:59.770651  287000 main.go:143] libmachine: domain cert-options-078413 has defined MAC address 52:54:00:a3:bc:84 in network default
	I1122 01:01:59.771476  287000 main.go:143] libmachine: domain cert-options-078413 has defined MAC address 52:54:00:87:07:77 in network mk-cert-options-078413
	I1122 01:01:59.771487  287000 main.go:143] libmachine: starting domain...
	I1122 01:01:59.771492  287000 main.go:143] libmachine: ensuring networks are active...
	I1122 01:01:59.772340  287000 main.go:143] libmachine: Ensuring network default is active
	I1122 01:01:59.772845  287000 main.go:143] libmachine: Ensuring network mk-cert-options-078413 is active
	I1122 01:01:59.773540  287000 main.go:143] libmachine: getting domain XML...
	I1122 01:01:59.774774  287000 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>cert-options-078413</name>
	  <uuid>7d5b9409-838e-452d-a626-9c2952f42d75</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/cert-options-078413/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21934-244751/.minikube/machines/cert-options-078413/cert-options-078413.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:87:07:77'/>
	      <source network='mk-cert-options-078413'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:a3:bc:84'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1122 01:02:01.147114  287000 main.go:143] libmachine: waiting for domain to start...
	I1122 01:02:01.148757  287000 main.go:143] libmachine: domain is now running
	I1122 01:02:01.148767  287000 main.go:143] libmachine: waiting for IP...
	I1122 01:02:01.149895  287000 main.go:143] libmachine: domain cert-options-078413 has defined MAC address 52:54:00:87:07:77 in network mk-cert-options-078413
	I1122 01:02:01.150576  287000 main.go:143] libmachine: no network interface addresses found for domain cert-options-078413 (source=lease)
	I1122 01:02:01.150587  287000 main.go:143] libmachine: trying to list again with source=arp
	I1122 01:02:01.150978  287000 main.go:143] libmachine: unable to find current IP address of domain cert-options-078413 in network mk-cert-options-078413 (interfaces detected: [])
	I1122 01:02:01.151034  287000 retry.go:31] will retry after 295.838225ms: waiting for domain to come up
	I1122 01:02:01.448558  287000 main.go:143] libmachine: domain cert-options-078413 has defined MAC address 52:54:00:87:07:77 in network mk-cert-options-078413
	I1122 01:02:01.449352  287000 main.go:143] libmachine: no network interface addresses found for domain cert-options-078413 (source=lease)
	I1122 01:02:01.449362  287000 main.go:143] libmachine: trying to list again with source=arp
	I1122 01:02:01.450008  287000 main.go:143] libmachine: unable to find current IP address of domain cert-options-078413 in network mk-cert-options-078413 (interfaces detected: [])
	I1122 01:02:01.450039  287000 retry.go:31] will retry after 357.632666ms: waiting for domain to come up
	I1122 01:02:01.810174  287000 main.go:143] libmachine: domain cert-options-078413 has defined MAC address 52:54:00:87:07:77 in network mk-cert-options-078413
	I1122 01:02:01.811039  287000 main.go:143] libmachine: no network interface addresses found for domain cert-options-078413 (source=lease)
	I1122 01:02:01.811051  287000 main.go:143] libmachine: trying to list again with source=arp
	I1122 01:02:01.811520  287000 main.go:143] libmachine: unable to find current IP address of domain cert-options-078413 in network mk-cert-options-078413 (interfaces detected: [])
	I1122 01:02:01.811570  287000 retry.go:31] will retry after 398.06809ms: waiting for domain to come up
	I1122 01:02:02.211520  287000 main.go:143] libmachine: domain cert-options-078413 has defined MAC address 52:54:00:87:07:77 in network mk-cert-options-078413
	I1122 01:02:02.212327  287000 main.go:143] libmachine: no network interface addresses found for domain cert-options-078413 (source=lease)
	I1122 01:02:02.212342  287000 main.go:143] libmachine: trying to list again with source=arp
	I1122 01:02:02.212807  287000 main.go:143] libmachine: unable to find current IP address of domain cert-options-078413 in network mk-cert-options-078413 (interfaces detected: [])
	I1122 01:02:02.212848  287000 retry.go:31] will retry after 537.983109ms: waiting for domain to come up
	I1122 01:02:02.752706  287000 main.go:143] libmachine: domain cert-options-078413 has defined MAC address 52:54:00:87:07:77 in network mk-cert-options-078413
	I1122 01:02:02.753445  287000 main.go:143] libmachine: no network interface addresses found for domain cert-options-078413 (source=lease)
	I1122 01:02:02.753456  287000 main.go:143] libmachine: trying to list again with source=arp
	I1122 01:02:02.753891  287000 main.go:143] libmachine: unable to find current IP address of domain cert-options-078413 in network mk-cert-options-078413 (interfaces detected: [])
	I1122 01:02:02.753923  287000 retry.go:31] will retry after 738.906106ms: waiting for domain to come up
	I1122 01:02:03.495202  287000 main.go:143] libmachine: domain cert-options-078413 has defined MAC address 52:54:00:87:07:77 in network mk-cert-options-078413
	I1122 01:02:03.495960  287000 main.go:143] libmachine: no network interface addresses found for domain cert-options-078413 (source=lease)
	I1122 01:02:03.495973  287000 main.go:143] libmachine: trying to list again with source=arp
	I1122 01:02:03.496466  287000 main.go:143] libmachine: unable to find current IP address of domain cert-options-078413 in network mk-cert-options-078413 (interfaces detected: [])
	I1122 01:02:03.496507  287000 retry.go:31] will retry after 631.598647ms: waiting for domain to come up
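The "will retry after ..." lines above show the KVM driver polling libvirt (lease table first, then ARP) for the new domain's IP with a growing, jittered delay. A minimal sketch of that polling pattern, using hypothetical names rather than minikube's actual retry.go helper:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the timeout elapses,
// sleeping a jittered, growing delay between attempts, like the retries above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay gradually
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet") // simulate an empty lease/ARP table
		}
		return "192.168.72.10", nil // stand-in address for the sketch
	}, 30*time.Second)
	fmt.Println(ip, err)
}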
	I1122 01:02:02.064554  286531 main.go:143] libmachine: domain pause-061914 has defined MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:02:02.065115  286531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:75:6f:57", ip: ""} in network mk-pause-061914: {Iface:virbr2 ExpiryTime:2025-11-22 02:00:27 +0000 UTC Type:0 Mac:52:54:00:75:6f:57 Iaid: IPaddr:192.168.50.109 Prefix:24 Hostname:pause-061914 Clientid:01:52:54:00:75:6f:57}
	I1122 01:02:02.065168  286531 main.go:143] libmachine: domain pause-061914 has defined IP address 192.168.50.109 and MAC address 52:54:00:75:6f:57 in network mk-pause-061914
	I1122 01:02:02.065441  286531 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1122 01:02:02.070962  286531 kubeadm.go:884] updating cluster {Name:pause-061914 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-061914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 01:02:02.071150  286531 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 01:02:02.071217  286531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 01:02:02.118013  286531 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 01:02:02.118039  286531 crio.go:433] Images already preloaded, skipping extraction
	I1122 01:02:02.118104  286531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 01:02:02.153893  286531 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 01:02:02.153920  286531 cache_images.go:86] Images are preloaded, skipping loading
	I1122 01:02:02.153929  286531 kubeadm.go:935] updating node { 192.168.50.109 8443 v1.34.1 crio true true} ...
	I1122 01:02:02.154050  286531 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-061914 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-061914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 01:02:02.154143  286531 ssh_runner.go:195] Run: crio config
	I1122 01:02:02.217383  286531 cni.go:84] Creating CNI manager for ""
	I1122 01:02:02.217410  286531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1122 01:02:02.217434  286531 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 01:02:02.217461  286531 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.109 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-061914 NodeName:pause-061914 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 01:02:02.217639  286531 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-061914"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.109"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.109"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 01:02:02.217747  286531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 01:02:02.231942  286531 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 01:02:02.232019  286531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 01:02:02.247847  286531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1122 01:02:02.275909  286531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 01:02:02.308234  286531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
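The kubeadm, kubelet and kube-proxy YAML above is rendered from the options logged at 01:02:02.217461 and then written to the node as /var/tmp/minikube/kubeadm.yaml.new by the "scp memory" step. A minimal sketch of that render-then-write flow using Go's text/template; the struct, template and field names here are assumptions for illustration, not minikube's actual ones:

package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// nodeOpts carries only the fields this sketch needs; the real options
// struct logged above has many more.
type nodeOpts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	var buf bytes.Buffer
	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	// Render with the values seen in the log; the result would then be
	// copied to /var/tmp/minikube/kubeadm.yaml.new on the node.
	if err := tmpl.Execute(&buf, nodeOpts{
		AdvertiseAddress: "192.168.50.109",
		BindPort:         8443,
		NodeName:         "pause-061914",
	}); err != nil {
		panic(err)
	}
	fmt.Print(buf.String())
}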
	I1122 01:02:02.335424  286531 ssh_runner.go:195] Run: grep 192.168.50.109	control-plane.minikube.internal$ /etc/hosts
	I1122 01:02:02.340704  286531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 01:02:02.528058  286531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 01:02:02.547962  286531 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914 for IP: 192.168.50.109
	I1122 01:02:02.547992  286531 certs.go:195] generating shared ca certs ...
	I1122 01:02:02.548015  286531 certs.go:227] acquiring lock for ca certs: {Name:mk43fa762c6315605485300e07cf83f8f357f8dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 01:02:02.548215  286531 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key
	I1122 01:02:02.548267  286531 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key
	I1122 01:02:02.548284  286531 certs.go:257] generating profile certs ...
	I1122 01:02:02.548366  286531 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914/client.key
	I1122 01:02:02.548436  286531 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914/apiserver.key.872d2023
	I1122 01:02:02.548495  286531 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914/proxy-client.key
	I1122 01:02:02.548628  286531 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/250664.pem (1338 bytes)
	W1122 01:02:02.548665  286531 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-244751/.minikube/certs/250664_empty.pem, impossibly tiny 0 bytes
	I1122 01:02:02.548690  286531 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca-key.pem (1675 bytes)
	I1122 01:02:02.548718  286531 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/ca.pem (1078 bytes)
	I1122 01:02:02.548744  286531 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/cert.pem (1123 bytes)
	I1122 01:02:02.548767  286531 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/certs/key.pem (1679 bytes)
	I1122 01:02:02.548806  286531 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-244751/.minikube/files/etc/ssl/certs/2506642.pem (1708 bytes)
	I1122 01:02:02.549488  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 01:02:02.583257  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 01:02:02.620647  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 01:02:02.655221  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1122 01:02:02.690536  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1122 01:02:02.823705  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 01:02:02.881758  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 01:02:03.005387  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/pause-061914/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 01:02:03.081834  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/certs/250664.pem --> /usr/share/ca-certificates/250664.pem (1338 bytes)
	I1122 01:02:03.191760  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/files/etc/ssl/certs/2506642.pem --> /usr/share/ca-certificates/2506642.pem (1708 bytes)
	I1122 01:02:03.320448  286531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-244751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 01:02:03.368160  286531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 01:02:03.420002  286531 ssh_runner.go:195] Run: openssl version
	I1122 01:02:03.435573  286531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/250664.pem && ln -fs /usr/share/ca-certificates/250664.pem /etc/ssl/certs/250664.pem"
	I1122 01:02:03.468631  286531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/250664.pem
	I1122 01:02:03.479688  286531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:59 /usr/share/ca-certificates/250664.pem
	I1122 01:02:03.479761  286531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/250664.pem
	I1122 01:02:03.504518  286531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/250664.pem /etc/ssl/certs/51391683.0"
	I1122 01:02:03.537348  286531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2506642.pem && ln -fs /usr/share/ca-certificates/2506642.pem /etc/ssl/certs/2506642.pem"
	I1122 01:02:03.577339  286531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2506642.pem
	I1122 01:02:03.595963  286531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:59 /usr/share/ca-certificates/2506642.pem
	I1122 01:02:03.596061  286531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2506642.pem
	I1122 01:02:03.621538  286531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2506642.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 01:02:03.647189  286531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 01:02:03.676624  286531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 01:02:03.690442  286531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I1122 01:02:03.690541  286531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 01:02:03.705947  286531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
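The sequence above installs each CA certificate into the node's system trust store by symlinking it under its OpenSSL subject hash (for example /etc/ssl/certs/b5213941.0 -> minikubeCA.pem). A minimal sketch of that step, assuming it runs as root on the node; installCert is a hypothetical helper, not minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert computes the OpenSSL subject hash of certPath and links the
// certificate into /etc/ssl/certs/<hash>.0, mirroring the
// "openssl x509 -hash -noout" plus "ln -fs" pair in the log above.
func installCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link)               // force-replace the link, as ln -fs would
	return os.Symlink(certPath, link) // e.g. /etc/ssl/certs/b5213941.0 -> minikubeCA.pem
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}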
	I1122 01:02:03.802498  286531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 01:02:03.819857  286531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 01:02:03.845575  286531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 01:02:03.864429  286531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 01:02:03.885738  286531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 01:02:03.902409  286531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 01:02:03.918123  286531 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1122 01:02:03.942047  286531 kubeadm.go:401] StartCluster: {Name:pause-061914 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-061914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.109 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 01:02:03.942202  286531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 01:02:03.942298  286531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 01:02:04.051885  286531 cri.go:89] found id: "161be095ba10d1572ff29bf3c719de6b58e32850e4287c96579d96e406e85caf"
	I1122 01:02:04.051917  286531 cri.go:89] found id: "a64d5f577f4a5692d8ea033bff031ad1e3f4af70745d1a49998e0c78a5c8d60e"
	I1122 01:02:04.051924  286531 cri.go:89] found id: "8781303c59c3d8f47b1c02fafb2fd09f80e5a65d57020fffa3037fea8ee2336d"
	I1122 01:02:04.051929  286531 cri.go:89] found id: "2b6f8a854e8133f151047122c37b7742c38c6d4c8964523664d83b89840d9ebc"
	I1122 01:02:04.051934  286531 cri.go:89] found id: "5e45f9f3a0e7e09a7dfda889ebe3adc52b152c6320cab9b7e828a9b307f3c45d"
	I1122 01:02:04.051939  286531 cri.go:89] found id: "944822276ab7ec36d828a05439a1fc8a89fa62323892f288fcfb237a9faaad90"
	I1122 01:02:04.051943  286531 cri.go:89] found id: "630da37e2d6572c7f8b7e6960ed3a9cdea7904a6b2a15dcfe577d09a2a276a1f"
	I1122 01:02:04.051946  286531 cri.go:89] found id: "3394b89f460291950559050d76177d9e67c8e1c83848e7e35826b51d65359566"
	I1122 01:02:04.051951  286531 cri.go:89] found id: "8aebcda395085a6e4ac25cf6620bd7f12cf1fedd4f1a58f8957c5ddc9cef47a0"
	I1122 01:02:04.051962  286531 cri.go:89] found id: "5b6ac30317cf466182d6a0241294a41524c6707a11669d34ad4b42285bda37bb"
	I1122 01:02:04.051968  286531 cri.go:89] found id: ""
	I1122 01:02:04.052033  286531 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-061914 -n pause-061914
helpers_test.go:269: (dbg) Run:  kubectl --context pause-061914 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
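The post-mortem above enumerates kube-system containers by running "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" on the node; each output line is one container ID, which is what the "found id:" entries echo back before runc is queried. A minimal sketch of collecting those IDs (hypothetical helper, assumed to run on the node):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs asks the CRI runtime for every container, running or
// exited, labelled with the kube-system namespace, and returns one ID per
// output line, matching the "found id:" entries in the log above.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	fmt.Println(len(ids), "kube-system containers found; err =", err)
}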
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-061914 -n pause-061914
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-061914 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-061914 logs -n 25: (1.746020548s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-450435 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-450435 │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │                     │
	│ start   │ -p kubernetes-upgrade-450435 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                  │ kubernetes-upgrade-450435 │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 01:00 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-504824 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ stopped-upgrade-504824    │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │                     │
	│ delete  │ -p stopped-upgrade-504824                                                                                                                                                                                               │ stopped-upgrade-504824    │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ start   │ -p pause-061914 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-061914              │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 01:01 UTC │
	│ ssh     │ -p NoKubernetes-061445 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-061445       │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │                     │
	│ stop    │ -p NoKubernetes-061445                                                                                                                                                                                                  │ NoKubernetes-061445       │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 01:00 UTC │
	│ start   │ -p NoKubernetes-061445 --driver=kvm2  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-061445       │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │ 22 Nov 25 01:00 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-702170 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ running-upgrade-702170    │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │                     │
	│ delete  │ -p running-upgrade-702170                                                                                                                                                                                               │ running-upgrade-702170    │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │ 22 Nov 25 01:00 UTC │
	│ start   │ -p cert-expiration-302431 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-302431    │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │ 22 Nov 25 01:01 UTC │
	│ delete  │ -p kubernetes-upgrade-450435                                                                                                                                                                                            │ kubernetes-upgrade-450435 │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │ 22 Nov 25 01:00 UTC │
	│ start   │ -p force-systemd-flag-555638 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                               │ force-systemd-flag-555638 │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │ 22 Nov 25 01:01 UTC │
	│ ssh     │ -p NoKubernetes-061445 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-061445       │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │                     │
	│ delete  │ -p NoKubernetes-061445                                                                                                                                                                                                  │ NoKubernetes-061445       │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │ 22 Nov 25 01:00 UTC │
	│ start   │ -p guest-688997 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                 │ guest-688997              │ jenkins │ v1.37.0 │ 22 Nov 25 01:00 UTC │ 22 Nov 25 01:01 UTC │
	│ start   │ -p pause-061914 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-061914              │ jenkins │ v1.37.0 │ 22 Nov 25 01:01 UTC │ 22 Nov 25 01:02 UTC │
	│ ssh     │ force-systemd-flag-555638 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                    │ force-systemd-flag-555638 │ jenkins │ v1.37.0 │ 22 Nov 25 01:01 UTC │ 22 Nov 25 01:01 UTC │
	│ delete  │ -p force-systemd-flag-555638                                                                                                                                                                                            │ force-systemd-flag-555638 │ jenkins │ v1.37.0 │ 22 Nov 25 01:01 UTC │ 22 Nov 25 01:01 UTC │
	│ start   │ -p cert-options-078413 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-078413       │ jenkins │ v1.37.0 │ 22 Nov 25 01:01 UTC │ 22 Nov 25 01:02 UTC │
	│ start   │ -p auto-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                                                                                   │ auto-842088               │ jenkins │ v1.37.0 │ 22 Nov 25 01:01 UTC │                     │
	│ ssh     │ cert-options-078413 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-078413       │ jenkins │ v1.37.0 │ 22 Nov 25 01:02 UTC │ 22 Nov 25 01:02 UTC │
	│ ssh     │ -p cert-options-078413 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-078413       │ jenkins │ v1.37.0 │ 22 Nov 25 01:02 UTC │ 22 Nov 25 01:02 UTC │
	│ delete  │ -p cert-options-078413                                                                                                                                                                                                  │ cert-options-078413       │ jenkins │ v1.37.0 │ 22 Nov 25 01:02 UTC │ 22 Nov 25 01:02 UTC │
	│ start   │ -p kindnet-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                                                                                  │ kindnet-842088            │ jenkins │ v1.37.0 │ 22 Nov 25 01:02 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 01:02:42
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 01:02:42.165873  287676 out.go:360] Setting OutFile to fd 1 ...
	I1122 01:02:42.165994  287676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 01:02:42.166002  287676 out.go:374] Setting ErrFile to fd 2...
	I1122 01:02:42.166009  287676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 01:02:42.166299  287676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 01:02:42.166904  287676 out.go:368] Setting JSON to false
	I1122 01:02:42.168144  287676 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":31490,"bootTime":1763741872,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 01:02:42.168240  287676 start.go:143] virtualization: kvm guest
	I1122 01:02:42.170623  287676 out.go:179] * [kindnet-842088] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 01:02:42.172288  287676 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 01:02:42.172526  287676 notify.go:221] Checking for updates...
	I1122 01:02:42.174895  287676 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 01:02:42.176374  287676 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1122 01:02:42.177649  287676 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	I1122 01:02:42.182371  287676 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 01:02:42.183782  287676 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 01:02:42.185579  287676 config.go:182] Loaded profile config "auto-842088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 01:02:42.185724  287676 config.go:182] Loaded profile config "cert-expiration-302431": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 01:02:42.185824  287676 config.go:182] Loaded profile config "guest-688997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1122 01:02:42.186006  287676 config.go:182] Loaded profile config "pause-061914": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 01:02:42.186157  287676 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 01:02:42.238438  287676 out.go:179] * Using the kvm2 driver based on user configuration
	I1122 01:02:42.239712  287676 start.go:309] selected driver: kvm2
	I1122 01:02:42.239734  287676 start.go:930] validating driver "kvm2" against <nil>
	I1122 01:02:42.239752  287676 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 01:02:42.240919  287676 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 01:02:42.241374  287676 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 01:02:42.241412  287676 cni.go:84] Creating CNI manager for "kindnet"
	I1122 01:02:42.241421  287676 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 01:02:42.241495  287676 start.go:353] cluster config:
	{Name:kindnet-842088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-842088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1122 01:02:42.241643  287676 iso.go:125] acquiring lock: {Name:mkc83d3435f1eaa5a92358fc78f85b7d74048deb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 01:02:42.243740  287676 out.go:179] * Starting "kindnet-842088" primary control-plane node in "kindnet-842088" cluster
	I1122 01:02:42.244994  287676 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 01:02:42.245034  287676 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 01:02:42.245062  287676 cache.go:65] Caching tarball of preloaded images
	I1122 01:02:42.245176  287676 preload.go:238] Found /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 01:02:42.245189  287676 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 01:02:42.245334  287676 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/config.json ...
	I1122 01:02:42.245363  287676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/config.json: {Name:mkfc6fe1f52fd931912ed998d4e670aa814c1813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 01:02:42.245551  287676 start.go:360] acquireMachinesLock for kindnet-842088: {Name:mk0193f6f5636a08cc7939c91649c5870e7698fe Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1122 01:02:42.245587  287676 start.go:364] duration metric: took 15.809µs to acquireMachinesLock for "kindnet-842088"
	I1122 01:02:42.245608  287676 start.go:93] Provisioning new machine with config: &{Name:kindnet-842088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.34.1 ClusterName:kindnet-842088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 01:02:42.245717  287676 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.141312577Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba871d62-65c1-42d0-b7a4-79bc6e824449 name=/runtime.v1.RuntimeService/Version
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.143103672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6fd73c8-d6b5-4412-baf3-9b732a51e79e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.144141881Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763773364144118291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6fd73c8-d6b5-4412-baf3-9b732a51e79e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.145133301Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c3fe843-8980-4e6b-8c33-a100474c4df1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.145252114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c3fe843-8980-4e6b-8c33-a100474c4df1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.145481722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:753f6b5e3bff1a7b3f48a54b1e7194d2cfe1015b67c0d3d6ae5585dac60de4f9,PodSandboxId:c9e74ef58eb5d1e3be4b1b88b7e50c6ca49d09ad057747f8cc719e2be826e03e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763773341542902675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c64319cf60fa0b845fa27ccb7a9c2,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,
\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33dc398a1236a471359e5fb8db17cdae11a8fcf9c3920e4778c6d680e3899a21,PodSandboxId:2ddab47f3440a3e1f7e8eb96e535f403605ff453c450640747fff99e57f96c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763773341462552692,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f7c21784426a83b5675a8bfdf54ea9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc
63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdbf40631ae6ee29590e8d68aac13524feef82376f9b6c68b4ad70e0faf11092,PodSandboxId:e7901305042d29cd0cda08723a3510725671fc0bc3dfc2f931f80e9c263da738,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763773341488757370,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: bfa5a32e323ec128ba14285053651ddb,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d82ba4ffde215f3077764b701bb2f9ea0cc3c8a0734ee899c9a0439a0dc3715,PodSandboxId:bc51135297fd2275548a42454bf5116a3c9d2dee44853a18ee6f1d826f9d0ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763773341428791080,Labels:map[string]string{io.kubernetes.container.name: kube-controll
er-manager,io.kubernetes.pod.name: kube-controller-manager-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ee994b4ed9be1476819906a9f48b1d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a550f3f823b5a064de963807e648aed87bf5cb32570a8c0b2d0702b5911df395,PodSandboxId:d3a0ac5a7c5ea0f3dd10b79cdcbe691ea5d727b8cc4ca449fc0ad8b61597f81d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:
CONTAINER_RUNNING,CreatedAt:1763773324550138797,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gx5vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123a6b26-f1a1-48ac-b223-982b6d5f9e54,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1373884eab0e3bd35fa79223a219dcf37f6e7fdd4d9872a0e1a5a8ea5d18305e,PodSandboxId:4018e917e01b1
0010530127ed5f5d4a75b3a1fca76897179d893d81d84c4dc53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763773323574127755,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qcq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ef1014-30ea-46b7-96b4-d85437bfbafd,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161be095ba10d1572ff29bf3c719de6b58e32850e4287c96579d96e406e85caf,PodSandboxId:e7901305042d29cd0cda08723a3510725671fc0bc3dfc2
f931f80e9c263da738,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763773323455515460,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa5a32e323ec128ba14285053651ddb,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64d5f577f4a5692d8ea
033bff031ad1e3f4af70745d1a49998e0c78a5c8d60e,PodSandboxId:2ddab47f3440a3e1f7e8eb96e535f403605ff453c450640747fff99e57f96c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763773323363574074,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f7c21784426a83b5675a8bfdf54ea9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8781303c59c3d8f47b1c02fafb2fd09f80e5a65d57020fffa3037fea8ee2336d,PodSandboxId:bc51135297fd2275548a42454bf5116a3c9d2dee44853a18ee6f1d826f9d0ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763773323306387903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ee994b4ed9be1476819906a9f48b1d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b6f8a854e8133f151047122c37b7742c38c6d4c8964523664d83b89840d9ebc,PodSandboxId:c9e74ef58eb5d1e3be4b1b88b7e50c6ca49d09ad057747f8cc719e2be826e03e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763773323287988951,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c64319cf60fa0b845fa27ccb7a9c2,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e45f9f3a0e7e09a7dfda889ebe3adc52b152c6320cab9b7e828a9b307f3c45d,PodSandboxId:f70a6bc8504b0d9fec1533ec9bc1ed2ba6dacff7ece9e8e39eab15c45b4d51fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763773257854522960,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gx5vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123a6b26-f1a1-48ac-b223-982b6d5f9e54,},Annotations:map[string]string{io.kubernetes
.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944822276ab7ec36d828a05439a1fc8a89fa62323892f288fcfb237a9faaad90,PodSandboxId:f6e3cc13fe333a06089ca01c765780f560faaa9b171a47f2f78ed62465baae4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fb
a68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763773257103873664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qcq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ef1014-30ea-46b7-96b4-d85437bfbafd,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c3fe843-8980-4e6b-8c33-a100474c4df1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.189158945Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5a7727a0-7770-49df-8e78-20953bd2d047 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.189567782Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d3a0ac5a7c5ea0f3dd10b79cdcbe691ea5d727b8cc4ca449fc0ad8b61597f81d,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-gx5vg,Uid:123a6b26-f1a1-48ac-b223-982b6d5f9e54,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1763773323160605291,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-gx5vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123a6b26-f1a1-48ac-b223-982b6d5f9e54,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-22T01:00:56.584347477Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e7901305042d29cd0cda08723a3510725671fc0bc3dfc2f931f80e9c263da738,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-061914,Uid:bfa5a32e323ec128ba14285053651ddb,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1763773322919367501,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa5a32e323ec128ba14285053651ddb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bfa5a32e323ec128ba14285053651ddb,kubernetes.io/config.seen: 2025-11-22T01:00:50.190066503Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2ddab47f3440a3e1f7e8eb96e535f403605ff453c450640747fff99e57f96c83,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-061914,Uid:93f7c21784426a83b5675a8bfdf54ea9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1763773322875281703,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f7c21784426a83b5675a8bfdf54ea9,tier: c
ontrol-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.109:8443,kubernetes.io/config.hash: 93f7c21784426a83b5675a8bfdf54ea9,kubernetes.io/config.seen: 2025-11-22T01:00:50.190071232Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4018e917e01b10010530127ed5f5d4a75b3a1fca76897179d893d81d84c4dc53,Metadata:&PodSandboxMetadata{Name:kube-proxy-qcq2l,Uid:44ef1014-30ea-46b7-96b4-d85437bfbafd,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1763773322870434596,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qcq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ef1014-30ea-46b7-96b4-d85437bfbafd,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-22T01:00:56.476901228Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c9e74ef58eb5d1e3be4b1b8
8b7e50c6ca49d09ad057747f8cc719e2be826e03e,Metadata:&PodSandboxMetadata{Name:etcd-pause-061914,Uid:a48c64319cf60fa0b845fa27ccb7a9c2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1763773322779579420,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c64319cf60fa0b845fa27ccb7a9c2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.109:2379,kubernetes.io/config.hash: a48c64319cf60fa0b845fa27ccb7a9c2,kubernetes.io/config.seen: 2025-11-22T01:00:50.190070179Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bc51135297fd2275548a42454bf5116a3c9d2dee44853a18ee6f1d826f9d0ea6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-061914,Uid:66ee994b4ed9be1476819906a9f48b1d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1763773322758955159,Labels:map[string]str
ing{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ee994b4ed9be1476819906a9f48b1d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 66ee994b4ed9be1476819906a9f48b1d,kubernetes.io/config.seen: 2025-11-22T01:00:50.190071982Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f70a6bc8504b0d9fec1533ec9bc1ed2ba6dacff7ece9e8e39eab15c45b4d51fa,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-gx5vg,Uid:123a6b26-f1a1-48ac-b223-982b6d5f9e54,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1763773256925560141,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-gx5vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123a6b26-f1a1-48ac-b223-982b6d5f9e54,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io
/config.seen: 2025-11-22T01:00:56.584347477Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f6e3cc13fe333a06089ca01c765780f560faaa9b171a47f2f78ed62465baae4e,Metadata:&PodSandboxMetadata{Name:kube-proxy-qcq2l,Uid:44ef1014-30ea-46b7-96b4-d85437bfbafd,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1763773256804035234,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qcq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ef1014-30ea-46b7-96b4-d85437bfbafd,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-22T01:00:56.476901228Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5a7727a0-7770-49df-8e78-20953bd2d047 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.190582339Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9f56928-c794-4299-bbe4-48c03d150d1c name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.190688251Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9f56928-c794-4299-bbe4-48c03d150d1c name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.191097657Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:753f6b5e3bff1a7b3f48a54b1e7194d2cfe1015b67c0d3d6ae5585dac60de4f9,PodSandboxId:c9e74ef58eb5d1e3be4b1b88b7e50c6ca49d09ad057747f8cc719e2be826e03e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763773341542902675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c64319cf60fa0b845fa27ccb7a9c2,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,
\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33dc398a1236a471359e5fb8db17cdae11a8fcf9c3920e4778c6d680e3899a21,PodSandboxId:2ddab47f3440a3e1f7e8eb96e535f403605ff453c450640747fff99e57f96c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763773341462552692,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f7c21784426a83b5675a8bfdf54ea9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc
63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdbf40631ae6ee29590e8d68aac13524feef82376f9b6c68b4ad70e0faf11092,PodSandboxId:e7901305042d29cd0cda08723a3510725671fc0bc3dfc2f931f80e9c263da738,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763773341488757370,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: bfa5a32e323ec128ba14285053651ddb,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d82ba4ffde215f3077764b701bb2f9ea0cc3c8a0734ee899c9a0439a0dc3715,PodSandboxId:bc51135297fd2275548a42454bf5116a3c9d2dee44853a18ee6f1d826f9d0ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763773341428791080,Labels:map[string]string{io.kubernetes.container.name: kube-controll
er-manager,io.kubernetes.pod.name: kube-controller-manager-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ee994b4ed9be1476819906a9f48b1d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a550f3f823b5a064de963807e648aed87bf5cb32570a8c0b2d0702b5911df395,PodSandboxId:d3a0ac5a7c5ea0f3dd10b79cdcbe691ea5d727b8cc4ca449fc0ad8b61597f81d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:
CONTAINER_RUNNING,CreatedAt:1763773324550138797,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gx5vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123a6b26-f1a1-48ac-b223-982b6d5f9e54,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1373884eab0e3bd35fa79223a219dcf37f6e7fdd4d9872a0e1a5a8ea5d18305e,PodSandboxId:4018e917e01b1
0010530127ed5f5d4a75b3a1fca76897179d893d81d84c4dc53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763773323574127755,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qcq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ef1014-30ea-46b7-96b4-d85437bfbafd,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161be095ba10d1572ff29bf3c719de6b58e32850e4287c96579d96e406e85caf,PodSandboxId:e7901305042d29cd0cda08723a3510725671fc0bc3dfc2
f931f80e9c263da738,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763773323455515460,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa5a32e323ec128ba14285053651ddb,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64d5f577f4a5692d8ea
033bff031ad1e3f4af70745d1a49998e0c78a5c8d60e,PodSandboxId:2ddab47f3440a3e1f7e8eb96e535f403605ff453c450640747fff99e57f96c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763773323363574074,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f7c21784426a83b5675a8bfdf54ea9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8781303c59c3d8f47b1c02fafb2fd09f80e5a65d57020fffa3037fea8ee2336d,PodSandboxId:bc51135297fd2275548a42454bf5116a3c9d2dee44853a18ee6f1d826f9d0ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763773323306387903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ee994b4ed9be1476819906a9f48b1d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b6f8a854e8133f151047122c37b7742c38c6d4c8964523664d83b89840d9ebc,PodSandboxId:c9e74ef58eb5d1e3be4b1b88b7e50c6ca49d09ad057747f8cc719e2be826e03e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763773323287988951,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c64319cf60fa0b845fa27ccb7a9c2,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e45f9f3a0e7e09a7dfda889ebe3adc52b152c6320cab9b7e828a9b307f3c45d,PodSandboxId:f70a6bc8504b0d9fec1533ec9bc1ed2ba6dacff7ece9e8e39eab15c45b4d51fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763773257854522960,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gx5vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123a6b26-f1a1-48ac-b223-982b6d5f9e54,},Annotations:map[string]string{io.kubernetes
.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944822276ab7ec36d828a05439a1fc8a89fa62323892f288fcfb237a9faaad90,PodSandboxId:f6e3cc13fe333a06089ca01c765780f560faaa9b171a47f2f78ed62465baae4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fb
a68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763773257103873664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qcq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ef1014-30ea-46b7-96b4-d85437bfbafd,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c9f56928-c794-4299-bbe4-48c03d150d1c name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.208743949Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd48043c-d55b-4699-9260-9f1887fad936 name=/runtime.v1.RuntimeService/Version
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.208945864Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd48043c-d55b-4699-9260-9f1887fad936 name=/runtime.v1.RuntimeService/Version
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.212610179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e9caf5e0-f461-4885-856f-ec6214343992 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.213828062Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763773364213790494,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9caf5e0-f461-4885-856f-ec6214343992 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.215528359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca50f185-0a86-437a-95d6-2db469d6720a name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.215700955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca50f185-0a86-437a-95d6-2db469d6720a name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.216413530Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:753f6b5e3bff1a7b3f48a54b1e7194d2cfe1015b67c0d3d6ae5585dac60de4f9,PodSandboxId:c9e74ef58eb5d1e3be4b1b88b7e50c6ca49d09ad057747f8cc719e2be826e03e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763773341542902675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c64319cf60fa0b845fa27ccb7a9c2,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,
\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33dc398a1236a471359e5fb8db17cdae11a8fcf9c3920e4778c6d680e3899a21,PodSandboxId:2ddab47f3440a3e1f7e8eb96e535f403605ff453c450640747fff99e57f96c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763773341462552692,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f7c21784426a83b5675a8bfdf54ea9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc
63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdbf40631ae6ee29590e8d68aac13524feef82376f9b6c68b4ad70e0faf11092,PodSandboxId:e7901305042d29cd0cda08723a3510725671fc0bc3dfc2f931f80e9c263da738,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763773341488757370,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: bfa5a32e323ec128ba14285053651ddb,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d82ba4ffde215f3077764b701bb2f9ea0cc3c8a0734ee899c9a0439a0dc3715,PodSandboxId:bc51135297fd2275548a42454bf5116a3c9d2dee44853a18ee6f1d826f9d0ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763773341428791080,Labels:map[string]string{io.kubernetes.container.name: kube-controll
er-manager,io.kubernetes.pod.name: kube-controller-manager-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ee994b4ed9be1476819906a9f48b1d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a550f3f823b5a064de963807e648aed87bf5cb32570a8c0b2d0702b5911df395,PodSandboxId:d3a0ac5a7c5ea0f3dd10b79cdcbe691ea5d727b8cc4ca449fc0ad8b61597f81d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:
CONTAINER_RUNNING,CreatedAt:1763773324550138797,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gx5vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123a6b26-f1a1-48ac-b223-982b6d5f9e54,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1373884eab0e3bd35fa79223a219dcf37f6e7fdd4d9872a0e1a5a8ea5d18305e,PodSandboxId:4018e917e01b1
0010530127ed5f5d4a75b3a1fca76897179d893d81d84c4dc53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763773323574127755,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qcq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ef1014-30ea-46b7-96b4-d85437bfbafd,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161be095ba10d1572ff29bf3c719de6b58e32850e4287c96579d96e406e85caf,PodSandboxId:e7901305042d29cd0cda08723a3510725671fc0bc3dfc2
f931f80e9c263da738,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763773323455515460,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa5a32e323ec128ba14285053651ddb,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64d5f577f4a5692d8ea
033bff031ad1e3f4af70745d1a49998e0c78a5c8d60e,PodSandboxId:2ddab47f3440a3e1f7e8eb96e535f403605ff453c450640747fff99e57f96c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763773323363574074,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f7c21784426a83b5675a8bfdf54ea9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8781303c59c3d8f47b1c02fafb2fd09f80e5a65d57020fffa3037fea8ee2336d,PodSandboxId:bc51135297fd2275548a42454bf5116a3c9d2dee44853a18ee6f1d826f9d0ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763773323306387903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ee994b4ed9be1476819906a9f48b1d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b6f8a854e8133f151047122c37b7742c38c6d4c8964523664d83b89840d9ebc,PodSandboxId:c9e74ef58eb5d1e3be4b1b88b7e50c6ca49d09ad057747f8cc719e2be826e03e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763773323287988951,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c64319cf60fa0b845fa27ccb7a9c2,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e45f9f3a0e7e09a7dfda889ebe3adc52b152c6320cab9b7e828a9b307f3c45d,PodSandboxId:f70a6bc8504b0d9fec1533ec9bc1ed2ba6dacff7ece9e8e39eab15c45b4d51fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763773257854522960,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gx5vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123a6b26-f1a1-48ac-b223-982b6d5f9e54,},Annotations:map[string]string{io.kubernetes
.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944822276ab7ec36d828a05439a1fc8a89fa62323892f288fcfb237a9faaad90,PodSandboxId:f6e3cc13fe333a06089ca01c765780f560faaa9b171a47f2f78ed62465baae4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fb
a68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763773257103873664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qcq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ef1014-30ea-46b7-96b4-d85437bfbafd,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca50f185-0a86-437a-95d6-2db469d6720a name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.272285619Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ca23c91-2dee-4958-9b8c-dd25ba0758ad name=/runtime.v1.RuntimeService/Version
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.272376394Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ca23c91-2dee-4958-9b8c-dd25ba0758ad name=/runtime.v1.RuntimeService/Version
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.273733161Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c921035f-daf8-4e8d-a16d-182a6c98a4b0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.274111831Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763773364274089360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c921035f-daf8-4e8d-a16d-182a6c98a4b0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.274852502Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ee39cc9-33cc-483f-bda3-d97201085144 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.274924513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ee39cc9-33cc-483f-bda3-d97201085144 name=/runtime.v1.RuntimeService/ListContainers
	Nov 22 01:02:44 pause-061914 crio[2805]: time="2025-11-22 01:02:44.275152861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:753f6b5e3bff1a7b3f48a54b1e7194d2cfe1015b67c0d3d6ae5585dac60de4f9,PodSandboxId:c9e74ef58eb5d1e3be4b1b88b7e50c6ca49d09ad057747f8cc719e2be826e03e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763773341542902675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c64319cf60fa0b845fa27ccb7a9c2,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,
\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33dc398a1236a471359e5fb8db17cdae11a8fcf9c3920e4778c6d680e3899a21,PodSandboxId:2ddab47f3440a3e1f7e8eb96e535f403605ff453c450640747fff99e57f96c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763773341462552692,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f7c21784426a83b5675a8bfdf54ea9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc
63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdbf40631ae6ee29590e8d68aac13524feef82376f9b6c68b4ad70e0faf11092,PodSandboxId:e7901305042d29cd0cda08723a3510725671fc0bc3dfc2f931f80e9c263da738,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763773341488757370,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: bfa5a32e323ec128ba14285053651ddb,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d82ba4ffde215f3077764b701bb2f9ea0cc3c8a0734ee899c9a0439a0dc3715,PodSandboxId:bc51135297fd2275548a42454bf5116a3c9d2dee44853a18ee6f1d826f9d0ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763773341428791080,Labels:map[string]string{io.kubernetes.container.name: kube-controll
er-manager,io.kubernetes.pod.name: kube-controller-manager-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ee994b4ed9be1476819906a9f48b1d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a550f3f823b5a064de963807e648aed87bf5cb32570a8c0b2d0702b5911df395,PodSandboxId:d3a0ac5a7c5ea0f3dd10b79cdcbe691ea5d727b8cc4ca449fc0ad8b61597f81d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:
CONTAINER_RUNNING,CreatedAt:1763773324550138797,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gx5vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123a6b26-f1a1-48ac-b223-982b6d5f9e54,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1373884eab0e3bd35fa79223a219dcf37f6e7fdd4d9872a0e1a5a8ea5d18305e,PodSandboxId:4018e917e01b1
0010530127ed5f5d4a75b3a1fca76897179d893d81d84c4dc53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763773323574127755,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qcq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ef1014-30ea-46b7-96b4-d85437bfbafd,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:161be095ba10d1572ff29bf3c719de6b58e32850e4287c96579d96e406e85caf,PodSandboxId:e7901305042d29cd0cda08723a3510725671fc0bc3dfc2
f931f80e9c263da738,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1763773323455515460,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfa5a32e323ec128ba14285053651ddb,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64d5f577f4a5692d8ea
033bff031ad1e3f4af70745d1a49998e0c78a5c8d60e,PodSandboxId:2ddab47f3440a3e1f7e8eb96e535f403605ff453c450640747fff99e57f96c83,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1763773323363574074,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93f7c21784426a83b5675a8bfdf54ea9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8781303c59c3d8f47b1c02fafb2fd09f80e5a65d57020fffa3037fea8ee2336d,PodSandboxId:bc51135297fd2275548a42454bf5116a3c9d2dee44853a18ee6f1d826f9d0ea6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1763773323306387903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66ee994b4ed9be1476819906a9f48b1d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b6f8a854e8133f151047122c37b7742c38c6d4c8964523664d83b89840d9ebc,PodSandboxId:c9e74ef58eb5d1e3be4b1b88b7e50c6ca49d09ad057747f8cc719e2be826e03e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1763773323287988951,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-061914,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a48c64319cf60fa0b845fa27ccb7a9c2,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e45f9f3a0e7e09a7dfda889ebe3adc52b152c6320cab9b7e828a9b307f3c45d,PodSandboxId:f70a6bc8504b0d9fec1533ec9bc1ed2ba6dacff7ece9e8e39eab15c45b4d51fa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1763773257854522960,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gx5vg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 123a6b26-f1a1-48ac-b223-982b6d5f9e54,},Annotations:map[string]string{io.kubernetes
.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:944822276ab7ec36d828a05439a1fc8a89fa62323892f288fcfb237a9faaad90,PodSandboxId:f6e3cc13fe333a06089ca01c765780f560faaa9b171a47f2f78ed62465baae4e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fb
a68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1763773257103873664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qcq2l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44ef1014-30ea-46b7-96b4-d85437bfbafd,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ee39cc9-33cc-483f-bda3-d97201085144 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	753f6b5e3bff1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   22 seconds ago       Running             etcd                      2                   c9e74ef58eb5d       etcd-pause-061914                      kube-system
	bdbf40631ae6e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   22 seconds ago       Running             kube-scheduler            2                   e7901305042d2       kube-scheduler-pause-061914            kube-system
	33dc398a1236a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   22 seconds ago       Running             kube-apiserver            2                   2ddab47f3440a       kube-apiserver-pause-061914            kube-system
	3d82ba4ffde21       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   22 seconds ago       Running             kube-controller-manager   2                   bc51135297fd2       kube-controller-manager-pause-061914   kube-system
	a550f3f823b5a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   39 seconds ago       Running             coredns                   1                   d3a0ac5a7c5ea       coredns-66bc5c9577-gx5vg               kube-system
	1373884eab0e3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   40 seconds ago       Running             kube-proxy                1                   4018e917e01b1       kube-proxy-qcq2l                       kube-system
	161be095ba10d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   40 seconds ago       Exited              kube-scheduler            1                   e7901305042d2       kube-scheduler-pause-061914            kube-system
	a64d5f577f4a5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   41 seconds ago       Exited              kube-apiserver            1                   2ddab47f3440a       kube-apiserver-pause-061914            kube-system
	8781303c59c3d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   41 seconds ago       Exited              kube-controller-manager   1                   bc51135297fd2       kube-controller-manager-pause-061914   kube-system
	2b6f8a854e813       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   41 seconds ago       Exited              etcd                      1                   c9e74ef58eb5d       etcd-pause-061914                      kube-system
	5e45f9f3a0e7e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   f70a6bc8504b0       coredns-66bc5c9577-gx5vg               kube-system
	944822276ab7e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   About a minute ago   Exited              kube-proxy                0                   f6e3cc13fe333       kube-proxy-qcq2l                       kube-system
	
	
	==> coredns [5e45f9f3a0e7e09a7dfda889ebe3adc52b152c6320cab9b7e828a9b307f3c45d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	[INFO] Reloading complete
	[INFO] 127.0.0.1:43746 - 65449 "HINFO IN 6291658636460937464.7795402595793742382. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036948428s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a550f3f823b5a064de963807e648aed87bf5cb32570a8c0b2d0702b5911df395] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48191 - 61130 "HINFO IN 2176741116805888282.3183143147075464491. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.041483815s
	
	
	==> describe nodes <==
	Name:               pause-061914
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-061914
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=pause-061914
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T01_00_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 01:00:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-061914
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 01:02:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 01:02:26 +0000   Sat, 22 Nov 2025 01:00:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 01:02:26 +0000   Sat, 22 Nov 2025 01:00:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 01:02:26 +0000   Sat, 22 Nov 2025 01:00:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 01:02:26 +0000   Sat, 22 Nov 2025 01:00:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.109
	  Hostname:    pause-061914
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 b5e7f28219524e89a093aff97bf9f591
	  System UUID:                b5e7f282-1952-4e89-a093-aff97bf9f591
	  Boot ID:                    b738a322-0c63-4903-9f81-d29ebb7c9cd4
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-gx5vg                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     108s
	  kube-system                 etcd-pause-061914                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         114s
	  kube-system                 kube-apiserver-pause-061914             250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-pause-061914    200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-qcq2l                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-pause-061914             100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 36s                kube-proxy       
	  Normal  NodeAllocatableEnforced  114s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node pause-061914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node pause-061914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node pause-061914 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  NodeReady                113s               kubelet          Node pause-061914 status is now: NodeReady
	  Normal  RegisteredNode           109s               node-controller  Node pause-061914 event: Registered Node pause-061914 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23s (x8 over 24s)  kubelet          Node pause-061914 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 24s)  kubelet          Node pause-061914 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 24s)  kubelet          Node pause-061914 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14s                node-controller  Node pause-061914 event: Registered Node pause-061914 in Controller
	
	
	==> dmesg <==
	[Nov22 01:00] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001501] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007212] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.173031] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.093898] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.105963] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.098196] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.176542] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.045018] kauditd_printk_skb: 18 callbacks suppressed
	[Nov22 01:01] kauditd_printk_skb: 222 callbacks suppressed
	[  +0.124216] kauditd_printk_skb: 38 callbacks suppressed
	[Nov22 01:02] kauditd_printk_skb: 319 callbacks suppressed
	[  +0.024805] kauditd_printk_skb: 79 callbacks suppressed
	
	
	==> etcd [2b6f8a854e8133f151047122c37b7742c38c6d4c8964523664d83b89840d9ebc] <==
	{"level":"warn","ts":"2025-11-22T01:02:06.875839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T01:02:06.885457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T01:02:06.898278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T01:02:06.906693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T01:02:06.917269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T01:02:06.927541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T01:02:07.005463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36434","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T01:02:17.938234Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-22T01:02:17.938340Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-061914","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.109:2380"],"advertise-client-urls":["https://192.168.50.109:2379"]}
	{"level":"error","ts":"2025-11-22T01:02:17.938434Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-22T01:02:17.940134Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-22T01:02:17.940246Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-22T01:02:17.940276Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"46a65bd61cd538c0","current-leader-member-id":"46a65bd61cd538c0"}
	{"level":"info","ts":"2025-11-22T01:02:17.940364Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-22T01:02:17.940394Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-22T01:02:17.940662Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.109:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-22T01:02:17.940728Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.109:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-22T01:02:17.940741Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.109:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-22T01:02:17.940821Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-22T01:02:17.940850Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-22T01:02:17.940861Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-22T01:02:17.944076Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.109:2380"}
	{"level":"error","ts":"2025-11-22T01:02:17.944145Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.109:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-22T01:02:17.944225Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.109:2380"}
	{"level":"info","ts":"2025-11-22T01:02:17.944234Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-061914","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.109:2380"],"advertise-client-urls":["https://192.168.50.109:2379"]}
	
	
	==> etcd [753f6b5e3bff1a7b3f48a54b1e7194d2cfe1015b67c0d3d6ae5585dac60de4f9] <==
	{"level":"info","ts":"2025-11-22T01:02:27.545671Z","caller":"traceutil/trace.go:172","msg":"trace[856244278] transaction","detail":"{read_only:false; number_of_response:0; response_revision:423; }","duration":"181.608392ms","start":"2025-11-22T01:02:27.364047Z","end":"2025-11-22T01:02:27.545656Z","steps":["trace[856244278] 'process raft request'  (duration: 178.524578ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T01:02:27.545744Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"181.323946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:public-info-viewer\" limit:1 ","response":"range_response_count:1 size:783"}
	{"level":"info","ts":"2025-11-22T01:02:27.545774Z","caller":"traceutil/trace.go:172","msg":"trace[90210994] range","detail":"{range_begin:/registry/clusterrolebindings/system:public-info-viewer; range_end:; response_count:1; response_revision:423; }","duration":"181.368409ms","start":"2025-11-22T01:02:27.364397Z","end":"2025-11-22T01:02:27.545765Z","steps":["trace[90210994] 'agreement among raft nodes before linearized reading'  (duration: 178.012423ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T01:02:27.560223Z","caller":"traceutil/trace.go:172","msg":"trace[1254703358] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"188.300744ms","start":"2025-11-22T01:02:27.371849Z","end":"2025-11-22T01:02:27.560150Z","steps":["trace[1254703358] 'process raft request'  (duration: 188.151332ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T01:02:29.886937Z","caller":"traceutil/trace.go:172","msg":"trace[431212940] linearizableReadLoop","detail":"{readStateIndex:480; appliedIndex:480; }","duration":"116.672365ms","start":"2025-11-22T01:02:29.770240Z","end":"2025-11-22T01:02:29.886912Z","steps":["trace[431212940] 'read index received'  (duration: 116.666204ms)","trace[431212940] 'applied index is now lower than readState.Index'  (duration: 5.444µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T01:02:29.951683Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"181.53377ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-11-22T01:02:29.951771Z","caller":"traceutil/trace.go:172","msg":"trace[1158590305] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/resourcequota-controller; range_end:; response_count:1; response_revision:434; }","duration":"181.632971ms","start":"2025-11-22T01:02:29.770121Z","end":"2025-11-22T01:02:29.951754Z","steps":["trace[1158590305] 'agreement among raft nodes before linearized reading'  (duration: 116.936355ms)","trace[1158590305] 'range keys from in-memory index tree'  (duration: 64.449973ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T01:02:29.951980Z","caller":"traceutil/trace.go:172","msg":"trace[1796271052] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"185.511693ms","start":"2025-11-22T01:02:29.766452Z","end":"2025-11-22T01:02:29.951963Z","steps":["trace[1796271052] 'process raft request'  (duration: 120.650875ms)","trace[1796271052] 'compare'  (duration: 64.677433ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T01:02:30.311701Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.18013ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-22T01:02:30.311849Z","caller":"traceutil/trace.go:172","msg":"trace[96668197] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:438; }","duration":"190.338368ms","start":"2025-11-22T01:02:30.121500Z","end":"2025-11-22T01:02:30.311838Z","steps":["trace[96668197] 'agreement among raft nodes before linearized reading'  (duration: 122.423585ms)","trace[96668197] 'range keys from in-memory index tree'  (duration: 67.646165ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T01:02:30.312032Z","caller":"traceutil/trace.go:172","msg":"trace[2019948781] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"156.477135ms","start":"2025-11-22T01:02:30.155539Z","end":"2025-11-22T01:02:30.312017Z","steps":["trace[2019948781] 'process raft request'  (duration: 88.457998ms)","trace[2019948781] 'compare'  (duration: 67.550388ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T01:02:30.311700Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.118011ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" limit:1 ","response":"range_response_count:1 size:203"}
	{"level":"info","ts":"2025-11-22T01:02:30.312504Z","caller":"traceutil/trace.go:172","msg":"trace[1462237125] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:438; }","duration":"192.93118ms","start":"2025-11-22T01:02:30.119562Z","end":"2025-11-22T01:02:30.312494Z","steps":["trace[1462237125] 'agreement among raft nodes before linearized reading'  (duration: 124.380973ms)","trace[1462237125] 'range keys from in-memory index tree'  (duration: 67.668045ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T01:02:30.312545Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.866666ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/pause-061914\" limit:1 ","response":"range_response_count:1 size:706"}
	{"level":"info","ts":"2025-11-22T01:02:30.312578Z","caller":"traceutil/trace.go:172","msg":"trace[1269323999] range","detail":"{range_begin:/registry/csinodes/pause-061914; range_end:; response_count:1; response_revision:439; }","duration":"114.903559ms","start":"2025-11-22T01:02:30.197663Z","end":"2025-11-22T01:02:30.312567Z","steps":["trace[1269323999] 'agreement among raft nodes before linearized reading'  (duration: 114.801756ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T01:02:30.312789Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.278055ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/pause-061914\" limit:1 ","response":"range_response_count:1 size:552"}
	{"level":"info","ts":"2025-11-22T01:02:30.312847Z","caller":"traceutil/trace.go:172","msg":"trace[874771084] range","detail":"{range_begin:/registry/leases/kube-node-lease/pause-061914; range_end:; response_count:1; response_revision:439; }","duration":"115.337071ms","start":"2025-11-22T01:02:30.197502Z","end":"2025-11-22T01:02:30.312839Z","steps":["trace[874771084] 'agreement among raft nodes before linearized reading'  (duration: 115.222551ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T01:02:30.313037Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.600767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4316"}
	{"level":"info","ts":"2025-11-22T01:02:30.313060Z","caller":"traceutil/trace.go:172","msg":"trace[666205182] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:439; }","duration":"116.624274ms","start":"2025-11-22T01:02:30.196428Z","end":"2025-11-22T01:02:30.313052Z","steps":["trace[666205182] 'agreement among raft nodes before linearized reading'  (duration: 116.5338ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T01:02:30.313145Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.221996ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-11-22T01:02:30.313240Z","caller":"traceutil/trace.go:172","msg":"trace[220246634] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:439; }","duration":"140.317389ms","start":"2025-11-22T01:02:30.172913Z","end":"2025-11-22T01:02:30.313230Z","steps":["trace[220246634] 'agreement among raft nodes before linearized reading'  (duration: 140.162818ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T01:02:30.314251Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"144.439567ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking\" limit:1 ","response":"range_response_count:1 size:370"}
	{"level":"warn","ts":"2025-11-22T01:02:30.316431Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.182684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/pause-061914.187a2e823b61b940\" limit:1 ","response":"range_response_count:1 size:679"}
	{"level":"info","ts":"2025-11-22T01:02:30.319315Z","caller":"traceutil/trace.go:172","msg":"trace[1464128208] range","detail":"{range_begin:/registry/events/default/pause-061914.187a2e823b61b940; range_end:; response_count:1; response_revision:439; }","duration":"150.066276ms","start":"2025-11-22T01:02:30.169229Z","end":"2025-11-22T01:02:30.319296Z","steps":["trace[1464128208] 'agreement among raft nodes before linearized reading'  (duration: 147.093964ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T01:02:30.316638Z","caller":"traceutil/trace.go:172","msg":"trace[88350750] range","detail":"{range_begin:/registry/configmaps/kube-system/kube-apiserver-legacy-service-account-token-tracking; range_end:; response_count:1; response_revision:439; }","duration":"145.970556ms","start":"2025-11-22T01:02:30.169530Z","end":"2025-11-22T01:02:30.315501Z","steps":["trace[88350750] 'agreement among raft nodes before linearized reading'  (duration: 144.197264ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:02:44 up 2 min,  0 users,  load average: 1.20, 0.61, 0.24
	Linux pause-061914 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [33dc398a1236a471359e5fb8db17cdae11a8fcf9c3920e4778c6d680e3899a21] <==
	I1122 01:02:25.585686       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1122 01:02:25.586318       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 01:02:25.590031       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 01:02:25.597446       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 01:02:25.597531       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 01:02:25.605873       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 01:02:25.606310       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 01:02:25.606444       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1122 01:02:25.630059       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 01:02:25.635033       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 01:02:25.643786       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 01:02:25.643840       1 policy_source.go:240] refreshing policies
	I1122 01:02:25.665300       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 01:02:25.667086       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1122 01:02:25.684823       1 cache.go:39] Caches are synced for autoregister controller
	I1122 01:02:25.684835       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 01:02:25.685299       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 01:02:26.357906       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 01:02:26.833045       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 01:02:28.117544       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 01:02:28.180016       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 01:02:28.218851       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 01:02:28.229396       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 01:02:30.338587       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 01:02:30.341122       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [a64d5f577f4a5692d8ea033bff031ad1e3f4af70745d1a49998e0c78a5c8d60e] <==
	I1122 01:02:07.872162       1 controller.go:176] quota evaluator worker shutdown
	I1122 01:02:07.872532       1 controller.go:176] quota evaluator worker shutdown
	I1122 01:02:07.872575       1 controller.go:176] quota evaluator worker shutdown
	I1122 01:02:07.872583       1 controller.go:176] quota evaluator worker shutdown
	I1122 01:02:07.872591       1 controller.go:176] quota evaluator worker shutdown
	E1122 01:02:08.589492       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1122 01:02:08.589565       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W1122 01:02:09.589349       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1122 01:02:09.589437       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	E1122 01:02:10.589005       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1122 01:02:10.589370       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1122 01:02:11.589052       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1122 01:02:11.589597       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1122 01:02:12.589144       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1122 01:02:12.589749       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1122 01:02:13.589415       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1122 01:02:13.590092       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1122 01:02:14.589148       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1122 01:02:14.589734       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W1122 01:02:15.588973       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1122 01:02:15.589010       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	E1122 01:02:16.588443       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1122 01:02:16.589873       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W1122 01:02:17.589091       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1122 01:02:17.589237       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-controller-manager [3d82ba4ffde215f3077764b701bb2f9ea0cc3c8a0734ee899c9a0439a0dc3715] <==
	I1122 01:02:30.152719       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-061914"
	I1122 01:02:30.152838       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 01:02:30.158069       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 01:02:30.158125       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 01:02:30.158208       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 01:02:30.158232       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 01:02:30.158238       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 01:02:30.158429       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 01:02:30.161747       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 01:02:30.165154       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 01:02:30.165160       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 01:02:30.166129       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1122 01:02:30.167283       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 01:02:30.168383       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 01:02:30.168471       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 01:02:30.171007       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 01:02:30.174690       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 01:02:30.175977       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1122 01:02:30.178270       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 01:02:30.178308       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 01:02:30.190226       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 01:02:30.193425       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 01:02:30.193449       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 01:02:30.193459       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 01:02:30.201841       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [8781303c59c3d8f47b1c02fafb2fd09f80e5a65d57020fffa3037fea8ee2336d] <==
	I1122 01:02:04.866500       1 serving.go:386] Generated self-signed cert in-memory
	I1122 01:02:06.116247       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1122 01:02:06.116303       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 01:02:06.121118       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1122 01:02:06.123915       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1122 01:02:06.124059       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 01:02:06.124154       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1122 01:02:17.642985       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.50.109:8443/healthz\": dial tcp 192.168.50.109:8443: connect: connection refused"
	
	
	==> kube-proxy [1373884eab0e3bd35fa79223a219dcf37f6e7fdd4d9872a0e1a5a8ea5d18305e] <==
	I1122 01:02:07.919334       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 01:02:07.919394       1 config.go:309] "Starting node config controller"
	I1122 01:02:07.919397       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 01:02:07.919402       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 01:02:07.919664       1 config.go:106] "Starting endpoint slice config controller"
	I1122 01:02:07.919672       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 01:02:07.919685       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 01:02:07.919688       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	E1122 01:02:07.919744       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.50.109:8443: connect: connection refused"
	E1122 01:02:07.919870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.109:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 01:02:07.920128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.109:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1122 01:02:07.920159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.109:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1122 01:02:08.983057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.109:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1122 01:02:09.206427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.109:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 01:02:09.381714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.109:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1122 01:02:11.181144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.109:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 01:02:11.499510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.109:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1122 01:02:12.022102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.109:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1122 01:02:14.912787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.109:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 01:02:15.534939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.109:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1122 01:02:15.717366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ServiceCIDR: Get \"https://control-plane.minikube.internal:8443/apis/networking.k8s.io/v1/servicecidrs?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.50.109:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1122 01:02:18.560372       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.50.109:8443: connect: connection refused"
	I1122 01:02:25.620385       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 01:02:25.620453       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 01:02:28.320044       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [944822276ab7ec36d828a05439a1fc8a89fa62323892f288fcfb237a9faaad90] <==
	I1122 01:00:57.627942       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 01:00:57.735819       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 01:00:57.739436       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.109"]
	E1122 01:00:57.739516       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 01:00:57.873581       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1122 01:00:57.873788       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1122 01:00:57.873864       1 server_linux.go:132] "Using iptables Proxier"
	I1122 01:00:57.933908       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 01:00:57.935047       1 server.go:527] "Version info" version="v1.34.1"
	I1122 01:00:57.935158       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 01:00:57.945591       1 config.go:309] "Starting node config controller"
	I1122 01:00:57.945687       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 01:00:57.945711       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 01:00:57.946859       1 config.go:200] "Starting service config controller"
	I1122 01:00:57.946887       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 01:00:57.946911       1 config.go:106] "Starting endpoint slice config controller"
	I1122 01:00:57.946914       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 01:00:57.946927       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 01:00:57.946958       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 01:00:58.047546       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 01:00:58.047579       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 01:00:58.047605       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [161be095ba10d1572ff29bf3c719de6b58e32850e4287c96579d96e406e85caf] <==
	I1122 01:02:05.502352       1 serving.go:386] Generated self-signed cert in-memory
	W1122 01:02:07.637297       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 01:02:07.637391       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 01:02:07.637414       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 01:02:07.637431       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 01:02:07.721454       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 01:02:07.721492       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1122 01:02:07.721547       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1122 01:02:07.727799       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 01:02:07.727871       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 01:02:07.729478       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1122 01:02:07.729552       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E1122 01:02:07.729634       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 01:02:07.729662       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 01:02:07.729680       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 01:02:07.729688       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1122 01:02:07.729745       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1122 01:02:07.729779       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1122 01:02:07.729785       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1122 01:02:07.729814       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bdbf40631ae6ee29590e8d68aac13524feef82376f9b6c68b4ad70e0faf11092] <==
	I1122 01:02:23.899266       1 serving.go:386] Generated self-signed cert in-memory
	W1122 01:02:25.545949       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 01:02:25.546317       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 01:02:25.546672       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 01:02:25.548274       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 01:02:25.611383       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 01:02:25.611430       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 01:02:25.617442       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 01:02:25.617591       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 01:02:25.617593       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 01:02:25.619008       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 01:02:25.718385       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 01:02:24 pause-061914 kubelet[3871]: E1122 01:02:24.070810    3871 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-061914\" not found" node="pause-061914"
	Nov 22 01:02:24 pause-061914 kubelet[3871]: E1122 01:02:24.071413    3871 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-061914\" not found" node="pause-061914"
	Nov 22 01:02:24 pause-061914 kubelet[3871]: E1122 01:02:24.072775    3871 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-061914\" not found" node="pause-061914"
	Nov 22 01:02:25 pause-061914 kubelet[3871]: E1122 01:02:25.071035    3871 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-061914\" not found" node="pause-061914"
	Nov 22 01:02:25 pause-061914 kubelet[3871]: E1122 01:02:25.071993    3871 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-061914\" not found" node="pause-061914"
	Nov 22 01:02:25 pause-061914 kubelet[3871]: I1122 01:02:25.538091    3871 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-061914"
	Nov 22 01:02:25 pause-061914 kubelet[3871]: I1122 01:02:25.716243    3871 apiserver.go:52] "Watching apiserver"
	Nov 22 01:02:25 pause-061914 kubelet[3871]: I1122 01:02:25.738482    3871 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 22 01:02:25 pause-061914 kubelet[3871]: I1122 01:02:25.759582    3871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44ef1014-30ea-46b7-96b4-d85437bfbafd-lib-modules\") pod \"kube-proxy-qcq2l\" (UID: \"44ef1014-30ea-46b7-96b4-d85437bfbafd\") " pod="kube-system/kube-proxy-qcq2l"
	Nov 22 01:02:25 pause-061914 kubelet[3871]: I1122 01:02:25.759648    3871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44ef1014-30ea-46b7-96b4-d85437bfbafd-xtables-lock\") pod \"kube-proxy-qcq2l\" (UID: \"44ef1014-30ea-46b7-96b4-d85437bfbafd\") " pod="kube-system/kube-proxy-qcq2l"
	Nov 22 01:02:26 pause-061914 kubelet[3871]: E1122 01:02:26.369621    3871 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-061914\" already exists" pod="kube-system/kube-apiserver-pause-061914"
	Nov 22 01:02:26 pause-061914 kubelet[3871]: I1122 01:02:26.370247    3871 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-061914"
	Nov 22 01:02:26 pause-061914 kubelet[3871]: I1122 01:02:26.379361    3871 kubelet_node_status.go:124] "Node was previously registered" node="pause-061914"
	Nov 22 01:02:26 pause-061914 kubelet[3871]: I1122 01:02:26.379429    3871 kubelet_node_status.go:78] "Successfully registered node" node="pause-061914"
	Nov 22 01:02:26 pause-061914 kubelet[3871]: I1122 01:02:26.379451    3871 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 01:02:26 pause-061914 kubelet[3871]: I1122 01:02:26.381950    3871 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 01:02:26 pause-061914 kubelet[3871]: E1122 01:02:26.828230    3871 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-061914\" already exists" pod="kube-system/kube-controller-manager-pause-061914"
	Nov 22 01:02:26 pause-061914 kubelet[3871]: I1122 01:02:26.828289    3871 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-061914"
	Nov 22 01:02:27 pause-061914 kubelet[3871]: E1122 01:02:27.359703    3871 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-061914\" already exists" pod="kube-system/kube-scheduler-pause-061914"
	Nov 22 01:02:27 pause-061914 kubelet[3871]: I1122 01:02:27.359752    3871 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-061914"
	Nov 22 01:02:27 pause-061914 kubelet[3871]: E1122 01:02:27.562458    3871 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-061914\" already exists" pod="kube-system/etcd-pause-061914"
	Nov 22 01:02:30 pause-061914 kubelet[3871]: E1122 01:02:30.960639    3871 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763773350959577617  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 22 01:02:30 pause-061914 kubelet[3871]: E1122 01:02:30.960979    3871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763773350959577617  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 22 01:02:40 pause-061914 kubelet[3871]: E1122 01:02:40.965357    3871 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763773360964024769  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 22 01:02:40 pause-061914 kubelet[3871]: E1122 01:02:40.965379    3871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763773360964024769  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-061914 -n pause-061914
helpers_test.go:269: (dbg) Run:  kubectl --context pause-061914 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (95.17s)

Test pass (293/345)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 6.95
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 4.11
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.17
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.66
22 TestOffline 75.98
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 129.73
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 9.55
35 TestAddons/parallel/Registry 16.04
36 TestAddons/parallel/RegistryCreds 0.84
38 TestAddons/parallel/InspektorGadget 11.83
39 TestAddons/parallel/MetricsServer 6.19
42 TestAddons/parallel/Headlamp 21.99
43 TestAddons/parallel/CloudSpanner 5.59
45 TestAddons/parallel/NvidiaDevicePlugin 6.58
46 TestAddons/parallel/Yakd 10.84
48 TestAddons/StoppedEnableDisable 88.01
49 TestCertOptions 48.38
50 TestCertExpiration 293.31
52 TestForceSystemdFlag 88.13
53 TestForceSystemdEnv 43.06
58 TestErrorSpam/setup 39.27
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.7
61 TestErrorSpam/pause 1.58
62 TestErrorSpam/unpause 1.79
63 TestErrorSpam/stop 88.56
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 55.21
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 51.43
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.2
75 TestFunctional/serial/CacheCmd/cache/add_local 1.15
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.2
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
80 TestFunctional/serial/CacheCmd/cache/delete 0.14
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 36.15
84 TestFunctional/serial/ComponentHealth 0.09
85 TestFunctional/serial/LogsCmd 1.56
86 TestFunctional/serial/LogsFileCmd 1.56
87 TestFunctional/serial/InvalidService 4.37
89 TestFunctional/parallel/ConfigCmd 0.51
91 TestFunctional/parallel/DryRun 0.25
92 TestFunctional/parallel/InternationalLanguage 0.13
93 TestFunctional/parallel/StatusCmd 0.81
97 TestFunctional/parallel/ServiceCmdConnect 9.54
98 TestFunctional/parallel/AddonsCmd 0.17
101 TestFunctional/parallel/SSHCmd 0.38
102 TestFunctional/parallel/CpCmd 1.45
104 TestFunctional/parallel/FileSync 0.23
105 TestFunctional/parallel/CertSync 1.06
109 TestFunctional/parallel/NodeLabels 0.1
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
113 TestFunctional/parallel/License 0.27
114 TestFunctional/parallel/Version/short 0.07
115 TestFunctional/parallel/Version/components 0.48
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.05
121 TestFunctional/parallel/ImageCommands/Setup 0.42
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.69
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
124 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.04
125 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.54
126 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
127 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.74
128 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
140 TestFunctional/parallel/ProfileCmd/profile_list 0.35
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
142 TestFunctional/parallel/MountCmd/any-port 93.17
143 TestFunctional/parallel/MountCmd/specific-port 1.54
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.11
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
148 TestFunctional/parallel/ServiceCmd/List 1.21
149 TestFunctional/parallel/ServiceCmd/JSONOutput 1.23
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 214.95
161 TestMultiControlPlane/serial/DeployApp 6.68
162 TestMultiControlPlane/serial/PingHostFromPods 1.37
163 TestMultiControlPlane/serial/AddWorkerNode 43.29
164 TestMultiControlPlane/serial/NodeLabels 0.08
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.69
166 TestMultiControlPlane/serial/CopyFile 11.06
167 TestMultiControlPlane/serial/StopSecondaryNode 88.69
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.52
169 TestMultiControlPlane/serial/RestartSecondaryNode 42.86
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.94
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 399.34
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.57
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.55
174 TestMultiControlPlane/serial/StopCluster 244.18
175 TestMultiControlPlane/serial/RestartCluster 126.55
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
177 TestMultiControlPlane/serial/AddSecondaryNode 87.62
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.73
183 TestJSONOutput/start/Command 80.89
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.73
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.65
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.25
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 88.31
215 TestMountStart/serial/StartWithMountFirst 22.49
216 TestMountStart/serial/VerifyMountFirst 0.32
217 TestMountStart/serial/StartWithMountSecond 20.89
218 TestMountStart/serial/VerifyMountSecond 0.32
219 TestMountStart/serial/DeleteFirst 0.69
220 TestMountStart/serial/VerifyMountPostDelete 0.32
221 TestMountStart/serial/Stop 1.39
222 TestMountStart/serial/RestartStopped 20.88
223 TestMountStart/serial/VerifyMountPostStop 0.3
226 TestMultiNode/serial/FreshStart2Nodes 130.83
227 TestMultiNode/serial/DeployApp2Nodes 5.39
228 TestMultiNode/serial/PingHostFrom2Pods 0.88
229 TestMultiNode/serial/AddNode 43.9
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.48
232 TestMultiNode/serial/CopyFile 6.28
233 TestMultiNode/serial/StopNode 2.31
234 TestMultiNode/serial/StartAfterStop 45.74
235 TestMultiNode/serial/RestartKeepsNodes 300.89
236 TestMultiNode/serial/DeleteNode 2.68
237 TestMultiNode/serial/StopMultiNode 175.65
238 TestMultiNode/serial/RestartMultiNode 87.61
239 TestMultiNode/serial/ValidateNameConflict 42.34
246 TestScheduledStopUnix 114.4
250 TestRunningBinaryUpgrade 137.22
252 TestKubernetesUpgrade 207.06
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
259 TestNoKubernetes/serial/StartWithK8s 87.33
264 TestNetworkPlugins/group/false 3.61
268 TestStoppedBinaryUpgrade/Setup 0.49
269 TestStoppedBinaryUpgrade/Upgrade 143.54
270 TestNoKubernetes/serial/StartWithStopK8s 47.48
271 TestNoKubernetes/serial/Start 55.66
272 TestStoppedBinaryUpgrade/MinikubeLogs 1.14
281 TestPause/serial/Start 73.99
282 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
284 TestNoKubernetes/serial/ProfileList 0.99
285 TestNoKubernetes/serial/Stop 1.37
286 TestNoKubernetes/serial/StartNoArgs 50.74
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.16
288 TestISOImage/Setup 62.42
291 TestISOImage/Binaries/crictl 0.18
292 TestISOImage/Binaries/curl 0.22
293 TestISOImage/Binaries/docker 0.2
294 TestISOImage/Binaries/git 0.19
295 TestISOImage/Binaries/iptables 0.21
296 TestISOImage/Binaries/podman 0.21
297 TestISOImage/Binaries/rsync 0.19
298 TestISOImage/Binaries/socat 0.18
299 TestISOImage/Binaries/wget 0.19
300 TestISOImage/Binaries/VBoxControl 0.2
301 TestISOImage/Binaries/VBoxService 0.19
302 TestNetworkPlugins/group/auto/Start 106.73
303 TestNetworkPlugins/group/kindnet/Start 60.71
304 TestNetworkPlugins/group/flannel/Start 86.84
305 TestNetworkPlugins/group/auto/KubeletFlags 0.18
306 TestNetworkPlugins/group/auto/NetCatPod 10.31
307 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
308 TestNetworkPlugins/group/kindnet/KubeletFlags 0.18
309 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
310 TestNetworkPlugins/group/auto/DNS 0.19
311 TestNetworkPlugins/group/auto/Localhost 0.17
312 TestNetworkPlugins/group/auto/HairPin 0.15
313 TestNetworkPlugins/group/kindnet/DNS 0.18
314 TestNetworkPlugins/group/kindnet/Localhost 0.15
315 TestNetworkPlugins/group/kindnet/HairPin 0.15
316 TestNetworkPlugins/group/enable-default-cni/Start 82.72
317 TestNetworkPlugins/group/flannel/ControllerPod 6.01
318 TestNetworkPlugins/group/custom-flannel/Start 89.2
319 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
320 TestNetworkPlugins/group/flannel/NetCatPod 10.26
321 TestNetworkPlugins/group/flannel/DNS 0.18
322 TestNetworkPlugins/group/flannel/Localhost 0.15
323 TestNetworkPlugins/group/flannel/HairPin 0.16
324 TestNetworkPlugins/group/bridge/Start 96.86
325 TestNetworkPlugins/group/calico/Start 101.94
326 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
327 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.28
328 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
329 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
330 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
331 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
332 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
333 TestNetworkPlugins/group/custom-flannel/DNS 0.2
334 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
335 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
337 TestStartStop/group/old-k8s-version/serial/FirstStart 96.03
339 TestStartStop/group/no-preload/serial/FirstStart 86.79
340 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
341 TestNetworkPlugins/group/bridge/NetCatPod 13.65
342 TestNetworkPlugins/group/bridge/DNS 0.15
343 TestNetworkPlugins/group/bridge/Localhost 0.16
344 TestNetworkPlugins/group/bridge/HairPin 0.17
346 TestStartStop/group/embed-certs/serial/FirstStart 88.24
347 TestNetworkPlugins/group/calico/ControllerPod 6.01
348 TestNetworkPlugins/group/calico/KubeletFlags 0.27
349 TestNetworkPlugins/group/calico/NetCatPod 14.05
350 TestNetworkPlugins/group/calico/DNS 0.19
351 TestNetworkPlugins/group/calico/Localhost 0.16
352 TestNetworkPlugins/group/calico/HairPin 0.16
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.35
355 TestStartStop/group/old-k8s-version/serial/DeployApp 10.42
356 TestStartStop/group/no-preload/serial/DeployApp 9.33
357 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.55
358 TestStartStop/group/old-k8s-version/serial/Stop 82.39
359 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
360 TestStartStop/group/no-preload/serial/Stop 73.16
361 TestStartStop/group/embed-certs/serial/DeployApp 10.29
362 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
363 TestStartStop/group/embed-certs/serial/Stop 81.51
364 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
365 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
366 TestStartStop/group/no-preload/serial/SecondStart 59.24
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
368 TestStartStop/group/default-k8s-diff-port/serial/Stop 87.85
369 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
370 TestStartStop/group/old-k8s-version/serial/SecondStart 58.69
371 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
372 TestStartStop/group/embed-certs/serial/SecondStart 50.6
373 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
375 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 13.01
376 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
377 TestStartStop/group/no-preload/serial/Pause 3.17
379 TestStartStop/group/newest-cni/serial/FirstStart 50.93
380 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
381 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
382 TestStartStop/group/old-k8s-version/serial/Pause 3.27
384 TestISOImage/PersistentMounts//data 0.21
385 TestISOImage/PersistentMounts//var/lib/docker 0.19
386 TestISOImage/PersistentMounts//var/lib/cni 0.19
387 TestISOImage/PersistentMounts//var/lib/kubelet 0.17
388 TestISOImage/PersistentMounts//var/lib/minikube 0.19
389 TestISOImage/PersistentMounts//var/lib/toolbox 0.19
390 TestISOImage/PersistentMounts//var/lib/boot2docker 0.19
391 TestISOImage/VersionJSON 0.19
392 TestISOImage/eBPFSupport 0.19
393 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.16
394 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.46
395 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
396 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
397 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
398 TestStartStop/group/embed-certs/serial/Pause 2.96
399 TestStartStop/group/newest-cni/serial/DeployApp 0
400 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.28
401 TestStartStop/group/newest-cni/serial/Stop 7.83
402 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
403 TestStartStop/group/newest-cni/serial/SecondStart 36.21
404 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
405 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
406 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
407 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.35
408 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
409 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
410 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
411 TestStartStop/group/newest-cni/serial/Pause 3.21
TestDownloadOnly/v1.28.0/json-events (6.95s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-246895 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-246895 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.947277295s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.95s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1121 23:46:42.126008  250664 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1121 23:46:42.126087  250664 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-246895
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-246895: exit status 85 (77.204659ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-246895 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-246895 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:46:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:46:35.234518  250676 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:46:35.234852  250676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:35.234864  250676 out.go:374] Setting ErrFile to fd 2...
	I1121 23:46:35.234868  250676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:35.235080  250676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	W1121 23:46:35.235222  250676 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21934-244751/.minikube/config/config.json: open /home/jenkins/minikube-integration/21934-244751/.minikube/config/config.json: no such file or directory
	I1121 23:46:35.235727  250676 out.go:368] Setting JSON to true
	I1121 23:46:35.236626  250676 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":26923,"bootTime":1763741872,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 23:46:35.236698  250676 start.go:143] virtualization: kvm guest
	I1121 23:46:35.241735  250676 out.go:99] [download-only-246895] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1121 23:46:35.241921  250676 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball: no such file or directory
	I1121 23:46:35.241935  250676 notify.go:221] Checking for updates...
	I1121 23:46:35.243385  250676 out.go:171] MINIKUBE_LOCATION=21934
	I1121 23:46:35.244917  250676 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:46:35.246784  250676 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1121 23:46:35.248466  250676 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	I1121 23:46:35.249953  250676 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1121 23:46:35.252576  250676 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 23:46:35.252819  250676 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:46:35.288116  250676 out.go:99] Using the kvm2 driver based on user configuration
	I1121 23:46:35.288156  250676 start.go:309] selected driver: kvm2
	I1121 23:46:35.288163  250676 start.go:930] validating driver "kvm2" against <nil>
	I1121 23:46:35.288476  250676 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 23:46:35.289041  250676 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1121 23:46:35.289187  250676 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 23:46:35.289219  250676 cni.go:84] Creating CNI manager for ""
	I1121 23:46:35.289279  250676 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1121 23:46:35.289290  250676 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1121 23:46:35.289343  250676 start.go:353] cluster config:
	{Name:download-only-246895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-246895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:46:35.289545  250676 iso.go:125] acquiring lock: {Name:mkc83d3435f1eaa5a92358fc78f85b7d74048deb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 23:46:35.291225  250676 out.go:99] Downloading VM boot image ...
	I1121 23:46:35.291263  250676 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21934-244751/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1121 23:46:38.696566  250676 out.go:99] Starting "download-only-246895" primary control-plane node in "download-only-246895" cluster
	I1121 23:46:38.696617  250676 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1121 23:46:38.711517  250676 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1121 23:46:38.711557  250676 cache.go:65] Caching tarball of preloaded images
	I1121 23:46:38.711796  250676 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1121 23:46:38.713640  250676 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1121 23:46:38.713666  250676 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1121 23:46:38.734398  250676 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1121 23:46:38.734549  250676 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-246895 host does not exist
	  To start a cluster, run: "minikube start -p download-only-246895"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-246895
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (4.11s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-263491 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-263491 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.113470535s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.11s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1121 23:46:46.640235  250664 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1121 23:46:46.640292  250664 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-244751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-263491
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-263491: exit status 85 (78.0903ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-246895 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-246895 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ delete  │ -p download-only-246895                                                                                                                                                 │ download-only-246895 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ start   │ -o=json --download-only -p download-only-263491 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-263491 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:46:42
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:46:42.580242  250875 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:46:42.580365  250875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:42.580375  250875 out.go:374] Setting ErrFile to fd 2...
	I1121 23:46:42.580379  250875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:42.580598  250875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1121 23:46:42.581063  250875 out.go:368] Setting JSON to true
	I1121 23:46:42.581924  250875 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":26931,"bootTime":1763741872,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 23:46:42.581985  250875 start.go:143] virtualization: kvm guest
	I1121 23:46:42.583894  250875 out.go:99] [download-only-263491] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 23:46:42.584078  250875 notify.go:221] Checking for updates...
	I1121 23:46:42.585257  250875 out.go:171] MINIKUBE_LOCATION=21934
	I1121 23:46:42.586773  250875 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:46:42.588248  250875 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1121 23:46:42.589318  250875 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	I1121 23:46:42.590355  250875 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-263491 host does not exist
	  To start a cluster, run: "minikube start -p download-only-263491"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

TestDownloadOnly/v1.34.1/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.17s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-263491
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.66s)

=== RUN   TestBinaryMirror
I1121 23:46:47.350724  250664 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-996598 --alsologtostderr --binary-mirror http://127.0.0.1:41123 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-996598" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-996598
--- PASS: TestBinaryMirror (0.66s)

TestOffline (75.98s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-950982 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-950982 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m15.040082144s)
helpers_test.go:175: Cleaning up "offline-crio-950982" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-950982
--- PASS: TestOffline (75.98s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-266876
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-266876: exit status 85 (66.5571ms)

                                                
                                                
-- stdout --
	* Profile "addons-266876" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-266876"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-266876
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-266876: exit status 85 (67.434028ms)

                                                
                                                
-- stdout --
	* Profile "addons-266876" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-266876"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (129.73s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-266876 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-266876 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m9.729350917s)
--- PASS: TestAddons/Setup (129.73s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-266876 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-266876 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/serial/GCPAuth/FakeCredentials (9.55s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-266876 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-266876 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6b5956ac-11bb-458f-953a-f0fa68bf575e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6b5956ac-11bb-458f-953a-f0fa68bf575e] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004509006s
addons_test.go:694: (dbg) Run:  kubectl --context addons-266876 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-266876 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-266876 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.55s)

TestAddons/parallel/Registry (16.04s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.797172ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-g5tcd" [5c882c89-9d82-4657-b7a0-d20f145866ab] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00750161s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-xjdbm" [9ce7be2d-00fc-42ac-8617-38e2d4ecac77] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012826019s
addons_test.go:392: (dbg) Run:  kubectl --context addons-266876 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-266876 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-266876 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.177296452s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 ip
2025/11/21 23:49:31 [DEBUG] GET http://192.168.39.50:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.04s)

TestAddons/parallel/RegistryCreds (0.84s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 8.059161ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-266876
addons_test.go:332: (dbg) Run:  kubectl --context addons-266876 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.84s)

TestAddons/parallel/InspektorGadget (11.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-lj59d" [7c271578-c4c6-438a-a2da-01249af27704] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.006036578s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266876 addons disable inspektor-gadget --alsologtostderr -v=1: (5.825500891s)
--- PASS: TestAddons/parallel/InspektorGadget (11.83s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.19s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 10.394738ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-tcd7p" [8b9a51ce-61d0-430c-98de-9174d78d47d6] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006689284s
addons_test.go:463: (dbg) Run:  kubectl --context addons-266876 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266876 addons disable metrics-server --alsologtostderr -v=1: (1.079146691s)
--- PASS: TestAddons/parallel/MetricsServer (6.19s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (21.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-266876 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-266876 --alsologtostderr -v=1: (1.114818409s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-ks2sf" [1055e04d-0232-4002-ae2b-d80a86f84f47] Pending
helpers_test.go:352: "headlamp-6945c6f4d-ks2sf" [1055e04d-0232-4002-ae2b-d80a86f84f47] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-ks2sf" [1055e04d-0232-4002-ae2b-d80a86f84f47] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.007286795s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266876 addons disable headlamp --alsologtostderr -v=1: (5.870802899s)
--- PASS: TestAddons/parallel/Headlamp (21.99s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-pknqz" [7c33bb29-0482-48bc-80ac-40bc0d800b01] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00482147s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-6fx49" [4603aca8-96ff-429c-870e-aaa1a7987b07] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005786507s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.58s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-9dnb2" [76452955-aa31-435f-92e7-89e3c7af1821] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005614609s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266876 addons disable yakd --alsologtostderr -v=1: (5.834236849s)
--- PASS: TestAddons/parallel/Yakd (10.84s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (88.01s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-266876
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-266876: (1m27.794872757s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-266876
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-266876
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-266876
--- PASS: TestAddons/StoppedEnableDisable (88.01s)
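The point of this test is that addon toggling still works while the cluster is stopped. A minimal sketch, with <profile> as a placeholder; enabling or disabling an addon on a stopped profile only edits the profile config, and the recorded state is applied on the next start.

    minikube stop -p <profile>
    minikube addons enable dashboard -p <profile>
    minikube addons disable dashboard -p <profile>
    minikube addons disable gvisor -p <profile>
    minikube start -p <profile>   # restart picks up whatever the profile now records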

                                                
                                    
x
+
TestCertOptions (48.38s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-078413 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-078413 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (46.878317706s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-078413 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-078413 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-078413 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-078413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-078413
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-078413: (1.037840305s)
--- PASS: TestCertOptions (48.38s)
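The certificate assertions above can be reproduced by hand. A sketch using the same flags as the test; <profile> is a placeholder. The extra IPs and names should appear as Subject Alternative Names, and the in-VM kubeconfig should point at the custom port.

    # start with extra SANs and a non-default apiserver port
    minikube start -p <profile> --memory=3072 --driver=kvm2 --container-runtime=crio \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555

    # check the SANs on the served apiserver certificate
    minikube -p <profile> ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"

    # the admin kubeconfig inside the VM should reference port 8555
    minikube ssh -p <profile> -- "sudo grep server: /etc/kubernetes/admin.conf"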

                                                
                                    
x
+
TestCertExpiration (293.31s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-302431 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-302431 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m10.517968162s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-302431 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-302431 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (41.842947038s)
helpers_test.go:175: Cleaning up "cert-expiration-302431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-302431
--- PASS: TestCertExpiration (293.31s)
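A sketch of the expiration scenario above: start with deliberately short-lived certificates, wait for them to lapse, then start again with a long expiry so they are regenerated. <profile> is a placeholder; the flag values are the ones the test uses.

    minikube start -p <profile> --memory=3072 --driver=kvm2 --container-runtime=crio --cert-expiration=3m
    sleep 180   # let the 3-minute certificates expire
    minikube start -p <profile> --memory=3072 --driver=kvm2 --container-runtime=crio --cert-expiration=8760h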

                                                
                                    
x
+
TestForceSystemdFlag (88.13s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-555638 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-555638 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m26.988586108s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-555638 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-555638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-555638
--- PASS: TestForceSystemdFlag (88.13s)
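Both force-systemd tests assert that CRI-O ends up with the systemd cgroup manager. A minimal manual check, assuming the same drop-in path the test reads; <profile> is a placeholder.

    minikube start -p <profile> --memory=3072 --force-systemd --driver=kvm2 --container-runtime=crio
    # the test reads this drop-in; cgroup_manager is expected to be "systemd"
    minikube -p <profile> ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager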

                                                
                                    
x
+
TestForceSystemdEnv (43.06s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-073043 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-073043 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (42.135556866s)
helpers_test.go:175: Cleaning up "force-systemd-env-073043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-073043
--- PASS: TestForceSystemdEnv (43.06s)

                                                
                                    
x
+
TestErrorSpam/setup (39.27s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-174719 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-174719 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-174719 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-174719 --driver=kvm2  --container-runtime=crio: (39.2690986s)
--- PASS: TestErrorSpam/setup (39.27s)

                                                
                                    
x
+
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
x
+
TestErrorSpam/status (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 status
--- PASS: TestErrorSpam/status (0.70s)

                                                
                                    
x
+
TestErrorSpam/pause (1.58s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 pause
--- PASS: TestErrorSpam/pause (1.58s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.79s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
x
+
TestErrorSpam/stop (88.56s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 stop
E1121 23:58:58.482495  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:58:58.489040  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:58:58.500504  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:58:58.521957  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:58:58.563524  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:58:58.645081  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:58:58.806766  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:58:59.128593  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:58:59.770805  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:01.052507  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:03.615543  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:08.737316  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:18.979161  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 stop: (1m25.324135453s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 stop: (1.29447258s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-174719 --log_dir /tmp/nospam-174719 stop: (1.944837172s)
--- PASS: TestErrorSpam/stop (88.56s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21934-244751/.minikube/files/etc/test/nested/copy/250664/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (55.21s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-783762 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1121 23:59:39.460949  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:00:20.423915  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-783762 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (55.211457685s)
--- PASS: TestFunctional/serial/StartWithProxy (55.21s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (51.43s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1122 00:00:28.086515  250664 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-783762 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-783762 --alsologtostderr -v=8: (51.432579346s)
functional_test.go:678: soft start took 51.433380643s for "functional-783762" cluster.
I1122 00:01:19.519516  250664 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (51.43s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-783762 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-783762 cache add registry.k8s.io/pause:3.1: (1.01996758s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-783762 cache add registry.k8s.io/pause:3.3: (1.089765496s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-783762 cache add registry.k8s.io/pause:latest: (1.086497877s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.20s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-783762 /tmp/TestFunctionalserialCacheCmdcacheadd_local1003889025/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 cache add minikube-local-cache-test:functional-783762
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 cache delete minikube-local-cache-test:functional-783762
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-783762
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-783762 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (205.650572ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
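The cache subcommands above form a small round trip: the image is added to minikube's local cache, deleted from the node's container runtime, and put back with cache reload. A sketch with <profile> as a placeholder.

    minikube -p <profile> cache add registry.k8s.io/pause:latest                 # pull into the cache and load onto the node
    minikube -p <profile> ssh sudo crictl rmi registry.k8s.io/pause:latest       # remove it from the node runtime
    minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image is gone
    minikube -p <profile> cache reload                                           # re-load cached images onto the node
    minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again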

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 kubectl -- --context functional-783762 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-783762 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (36.15s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-783762 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1122 00:01:42.347151  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-783762 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.145406439s)
functional_test.go:776: restart took 36.145551304s for "functional-783762" cluster.
I1122 00:02:02.548965  250664 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (36.15s)
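The restart above forwards a flag to a control-plane component via --extra-config in the component.key=value form. A sketch of the same call; the admission-plugin value is the one from the test and <profile> is a placeholder.

    minikube start -p <profile> --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # the option is persisted with the profile; it shows up as ExtraOptions in the config dump logged by later starts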

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-783762 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.56s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-783762 logs: (1.556126582s)
--- PASS: TestFunctional/serial/LogsCmd (1.56s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.56s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 logs --file /tmp/TestFunctionalserialLogsFileCmd2793950274/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-783762 logs --file /tmp/TestFunctionalserialLogsFileCmd2793950274/001/logs.txt: (1.55616733s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.56s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.37s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-783762 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-783762
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-783762: exit status 115 (270.756403ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.76:31817 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-783762 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.37s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-783762 config get cpus: exit status 14 (91.947315ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-783762 config get cpus: exit status 14 (78.65984ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-783762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-783762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (132.091458ms)

                                                
                                                
-- stdout --
	* [functional-783762] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:03:56.619976  260620 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:03:56.620226  260620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:03:56.620237  260620 out.go:374] Setting ErrFile to fd 2...
	I1122 00:03:56.620244  260620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:03:56.620481  260620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 00:03:56.620968  260620 out.go:368] Setting JSON to false
	I1122 00:03:56.621871  260620 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27965,"bootTime":1763741872,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:03:56.621940  260620 start.go:143] virtualization: kvm guest
	I1122 00:03:56.624341  260620 out.go:179] * [functional-783762] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:03:56.626290  260620 notify.go:221] Checking for updates...
	I1122 00:03:56.626306  260620 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:03:56.627973  260620 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:03:56.629527  260620 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1122 00:03:56.630879  260620 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	I1122 00:03:56.632339  260620 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:03:56.633843  260620 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:03:56.635806  260620 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:03:56.636552  260620 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:03:56.669953  260620 out.go:179] * Using the kvm2 driver based on existing profile
	I1122 00:03:56.671370  260620 start.go:309] selected driver: kvm2
	I1122 00:03:56.671392  260620 start.go:930] validating driver "kvm2" against &{Name:functional-783762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-783762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:03:56.671562  260620 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:03:56.674056  260620 out.go:203] 
	W1122 00:03:56.675569  260620 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1122 00:03:56.677327  260620 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-783762 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.25s)
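The dry-run exercise shows that argument validation runs before any machine is touched: an impossible memory request fails with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23, while a valid dry run exits cleanly. A sketch with <profile> as a placeholder.

    minikube start -p <profile> --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    echo $?   # 23: requested memory is below the usable minimum; nothing was created
    minikube start -p <profile> --dry-run --driver=kvm2 --container-runtime=crio
    echo $?   # 0: configuration validates against the existing profile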

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-783762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-783762 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (127.366319ms)

                                                
                                                
-- stdout --
	* [functional-783762] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:03:56.869086  260652 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:03:56.869638  260652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:03:56.869656  260652 out.go:374] Setting ErrFile to fd 2...
	I1122 00:03:56.869662  260652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:03:56.870298  260652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 00:03:56.871056  260652 out.go:368] Setting JSON to false
	I1122 00:03:56.872195  260652 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":27965,"bootTime":1763741872,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:03:56.872344  260652 start.go:143] virtualization: kvm guest
	I1122 00:03:56.874227  260652 out.go:179] * [functional-783762] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1122 00:03:56.875684  260652 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:03:56.875758  260652 notify.go:221] Checking for updates...
	I1122 00:03:56.878576  260652 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:03:56.880113  260652 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1122 00:03:56.884960  260652 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	I1122 00:03:56.886574  260652 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:03:56.887970  260652 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:03:56.889736  260652 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:03:56.890303  260652 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:03:56.921947  260652 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1122 00:03:56.923317  260652 start.go:309] selected driver: kvm2
	I1122 00:03:56.923337  260652 start.go:930] validating driver "kvm2" against &{Name:functional-783762 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-783762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.76 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:03:56.923483  260652 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:03:56.925829  260652 out.go:203] 
	W1122 00:03:56.927360  260652 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1122 00:03:56.928728  260652 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 status -o json
I1122 00:02:17.793922  250664 retry.go:31] will retry after 2.525079777s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:a5d2e49c-c8c9-4715-ada1-0d2e98962676 ResourceVersion:722 Generation:0 CreationTimestamp:2025-11-22 00:02:17 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001917dd0 VolumeMode:0xc001917de0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-783762 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-783762 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-cjd8h" [4abfb066-72db-4ce1-8963-25d7bee932a0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-cjd8h" [4abfb066-72db-4ce1-8963-25d7bee932a0] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003449815s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.76:31997
functional_test.go:1680: http://192.168.39.76:31997: success! body:
Request served by hello-node-connect-7d85dfc575-cjd8h

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.76:31997
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.54s)
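
The body above is simply the request echoed back, and the Go-http-client/1.1 user agent indicates a plain net/http GET. A minimal sketch of the same probe, assuming the NodePort URL printed by `service hello-node-connect --url` above:

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// URL taken from the `minikube service ... --url` output above.
		resp, err := http.Get("http://192.168.39.76:31997")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		// kicbase/echo-server answers with the request it received, as logged above.
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s", body)
	}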

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh -n functional-783762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 cp functional-783762:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1975875555/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh -n functional-783762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh -n functional-783762 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.45s)

                                                
                                    
TestFunctional/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/250664/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "sudo cat /etc/test/nested/copy/250664/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
TestFunctional/parallel/CertSync (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/250664.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "sudo cat /etc/ssl/certs/250664.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/250664.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "sudo cat /usr/share/ca-certificates/250664.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2506642.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "sudo cat /etc/ssl/certs/2506642.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2506642.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "sudo cat /usr/share/ca-certificates/2506642.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.06s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-783762 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-783762 ssh "sudo systemctl is-active docker": exit status 1 (193.954661ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-783762 ssh "sudo systemctl is-active containerd": exit status 1 (235.970301ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
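
Both probes above rely on `systemctl is-active` printing the unit state on stdout while exiting non-zero (status 3 in the log) when the unit is not active. A minimal sketch of reading both the state and the exit code from Go; run locally here purely for illustration, whereas the test issues it inside the VM over ssh:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("systemctl", "is-active", "docker").CombinedOutput()
		state := strings.TrimSpace(string(out))
		if err == nil {
			fmt.Printf("state=%s exit=0\n", state)
			return
		}
		if exitErr, ok := err.(*exec.ExitError); ok {
			// An inactive unit yields a non-zero exit code (3 in the log above).
			fmt.Printf("state=%s exit=%d\n", state, exitErr.ExitCode())
			return
		}
		panic(err) // systemctl could not be run at all
	}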

                                                
                                    
TestFunctional/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-783762 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-783762
localhost/kicbase/echo-server:functional-783762
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-783762 image ls --format short --alsologtostderr:
I1122 00:08:24.881999  261745 out.go:360] Setting OutFile to fd 1 ...
I1122 00:08:24.882343  261745 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:24.882356  261745 out.go:374] Setting ErrFile to fd 2...
I1122 00:08:24.882363  261745 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:24.882703  261745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
I1122 00:08:24.883551  261745 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:24.883753  261745 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:24.886801  261745 ssh_runner.go:195] Run: systemctl --version
I1122 00:08:24.889624  261745 main.go:143] libmachine: domain functional-783762 has defined MAC address 52:54:00:34:a6:b6 in network mk-functional-783762
I1122 00:08:24.890282  261745 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:34:a6:b6", ip: ""} in network mk-functional-783762: {Iface:virbr1 ExpiryTime:2025-11-22 00:59:48 +0000 UTC Type:0 Mac:52:54:00:34:a6:b6 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:functional-783762 Clientid:01:52:54:00:34:a6:b6}
I1122 00:08:24.890320  261745 main.go:143] libmachine: domain functional-783762 has defined IP address 192.168.39.76 and MAC address 52:54:00:34:a6:b6 in network mk-functional-783762
I1122 00:08:24.890514  261745 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/functional-783762/id_rsa Username:docker}
I1122 00:08:24.973372  261745 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-783762 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-783762  │ 9056ab77afb8e │ 4.95MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-783762  │ 8e89f7eb1cf3c │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ localhost/minikube-local-cache-test     │ functional-783762  │ d1a0aef862e13 │ 3.33kB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-783762 image ls --format table --alsologtostderr:
I1122 00:08:28.537359  261809 out.go:360] Setting OutFile to fd 1 ...
I1122 00:08:28.537619  261809 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:28.537630  261809 out.go:374] Setting ErrFile to fd 2...
I1122 00:08:28.537635  261809 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:28.537825  261809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
I1122 00:08:28.538390  261809 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:28.538491  261809 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:28.541162  261809 ssh_runner.go:195] Run: systemctl --version
I1122 00:08:28.544194  261809 main.go:143] libmachine: domain functional-783762 has defined MAC address 52:54:00:34:a6:b6 in network mk-functional-783762
I1122 00:08:28.544789  261809 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:34:a6:b6", ip: ""} in network mk-functional-783762: {Iface:virbr1 ExpiryTime:2025-11-22 00:59:48 +0000 UTC Type:0 Mac:52:54:00:34:a6:b6 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:functional-783762 Clientid:01:52:54:00:34:a6:b6}
I1122 00:08:28.544821  261809 main.go:143] libmachine: domain functional-783762 has defined IP address 192.168.39.76 and MAC address 52:54:00:34:a6:b6 in network mk-functional-783762
I1122 00:08:28.545039  261809 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/functional-783762/id_rsa Username:docker}
I1122 00:08:28.629307  261809 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-783762 image ls --format json --alsologtostderr:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"d1a0aef862e13e0f44ad2ef92591b29f1fa2075dc78eb0a9955ea38ae5d20deb","repoDigests":["localhost/minikube-local-cache-test@sha256:78e532c67c8d03fd769104c5c4d3b2d0bf7671d3776ed36d4fb6805978a809e7"],"repoTags":["localhost/minikube-local-cache-test:functional-783762"],"size":"3328"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"7610354
7"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf6081
9cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"44818e085e3b96490735f0b5acf8b1fc8c653ebdff70f0438b79b3d8362fb542","repoDigests":["docker.io/library/16c20965500fedac2333b101d7c745fee903ee4e243b50977a3c6f4bcebd33a9-tmp@sha256:2a90f90c9ae3c9c9d7373c3085f58d33703409b06e303a079d0b30a3a1d68714"],"repoTags":[],"size":"1466018"},{"id":"8e89f7eb1cf3ca1552f2df2dacbfecc265cd4b6ec9083325c751f4f24793b5b8","repoDigests":["localhost/my-image@sha256:15bb3d06a03f98f2e2396b2c3938cec0b145fc2a378f34f63d8089465e039d06"],"repoTags":["localhost/my-image:functional-783762"],"size":"1468599"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"
,"registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"
247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-783762"],"size":"4945146"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b4610899694
49f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["regist
ry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-783762 image ls --format json --alsologtostderr:
I1122 00:08:28.333016  261798 out.go:360] Setting OutFile to fd 1 ...
I1122 00:08:28.333301  261798 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:28.333311  261798 out.go:374] Setting ErrFile to fd 2...
I1122 00:08:28.333315  261798 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:28.333505  261798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
I1122 00:08:28.334100  261798 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:28.334201  261798 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:28.336731  261798 ssh_runner.go:195] Run: systemctl --version
I1122 00:08:28.339531  261798 main.go:143] libmachine: domain functional-783762 has defined MAC address 52:54:00:34:a6:b6 in network mk-functional-783762
I1122 00:08:28.340054  261798 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:34:a6:b6", ip: ""} in network mk-functional-783762: {Iface:virbr1 ExpiryTime:2025-11-22 00:59:48 +0000 UTC Type:0 Mac:52:54:00:34:a6:b6 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:functional-783762 Clientid:01:52:54:00:34:a6:b6}
I1122 00:08:28.340149  261798 main.go:143] libmachine: domain functional-783762 has defined IP address 192.168.39.76 and MAC address 52:54:00:34:a6:b6 in network mk-functional-783762
I1122 00:08:28.340330  261798 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/functional-783762/id_rsa Username:docker}
I1122 00:08:28.426450  261798 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
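
The JSON in the stdout above is a flat array of image records. A minimal sketch of consuming it from Go, with struct fields named after the keys visible in that output (id, repoDigests, repoTags, size); the struct itself is an assumption for illustration, not a type from minikube:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image mirrors the keys visible in the `image ls --format json` output above.
	type image struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-783762",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			panic(err)
		}
		var images []image
		if err := json.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			fmt.Println(img.ID[:12], img.RepoTags, img.Size)
		}
	}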

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-783762 image ls --format yaml --alsologtostderr:
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-783762
size: "4945146"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: d1a0aef862e13e0f44ad2ef92591b29f1fa2075dc78eb0a9955ea38ae5d20deb
repoDigests:
- localhost/minikube-local-cache-test@sha256:78e532c67c8d03fd769104c5c4d3b2d0bf7671d3776ed36d4fb6805978a809e7
repoTags:
- localhost/minikube-local-cache-test:functional-783762
size: "3328"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-783762 image ls --format yaml --alsologtostderr:
I1122 00:08:25.078118  261755 out.go:360] Setting OutFile to fd 1 ...
I1122 00:08:25.078440  261755 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:25.078458  261755 out.go:374] Setting ErrFile to fd 2...
I1122 00:08:25.078464  261755 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:25.078787  261755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
I1122 00:08:25.079621  261755 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:25.079769  261755 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:25.082592  261755 ssh_runner.go:195] Run: systemctl --version
I1122 00:08:25.084929  261755 main.go:143] libmachine: domain functional-783762 has defined MAC address 52:54:00:34:a6:b6 in network mk-functional-783762
I1122 00:08:25.085334  261755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:34:a6:b6", ip: ""} in network mk-functional-783762: {Iface:virbr1 ExpiryTime:2025-11-22 00:59:48 +0000 UTC Type:0 Mac:52:54:00:34:a6:b6 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:functional-783762 Clientid:01:52:54:00:34:a6:b6}
I1122 00:08:25.085362  261755 main.go:143] libmachine: domain functional-783762 has defined IP address 192.168.39.76 and MAC address 52:54:00:34:a6:b6 in network mk-functional-783762
I1122 00:08:25.085527  261755 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/functional-783762/id_rsa Username:docker}
I1122 00:08:25.168594  261755 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-783762 ssh pgrep buildkitd: exit status 1 (179.082342ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image build -t localhost/my-image:functional-783762 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-783762 image build -t localhost/my-image:functional-783762 testdata/build --alsologtostderr: (2.654907805s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-783762 image build -t localhost/my-image:functional-783762 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 44818e085e3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-783762
--> 8e89f7eb1cf
Successfully tagged localhost/my-image:functional-783762
8e89f7eb1cf3ca1552f2df2dacbfecc265cd4b6ec9083325c751f4f24793b5b8
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-783762 image build -t localhost/my-image:functional-783762 testdata/build --alsologtostderr:
I1122 00:08:25.449853  261776 out.go:360] Setting OutFile to fd 1 ...
I1122 00:08:25.450168  261776 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:25.450179  261776 out.go:374] Setting ErrFile to fd 2...
I1122 00:08:25.450183  261776 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:25.450398  261776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
I1122 00:08:25.450985  261776 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:25.451774  261776 config.go:182] Loaded profile config "functional-783762": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:25.453799  261776 ssh_runner.go:195] Run: systemctl --version
I1122 00:08:25.456015  261776 main.go:143] libmachine: domain functional-783762 has defined MAC address 52:54:00:34:a6:b6 in network mk-functional-783762
I1122 00:08:25.456451  261776 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:34:a6:b6", ip: ""} in network mk-functional-783762: {Iface:virbr1 ExpiryTime:2025-11-22 00:59:48 +0000 UTC Type:0 Mac:52:54:00:34:a6:b6 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:functional-783762 Clientid:01:52:54:00:34:a6:b6}
I1122 00:08:25.456481  261776 main.go:143] libmachine: domain functional-783762 has defined IP address 192.168.39.76 and MAC address 52:54:00:34:a6:b6 in network mk-functional-783762
I1122 00:08:25.456610  261776 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/functional-783762/id_rsa Username:docker}
I1122 00:08:25.545706  261776 build_images.go:162] Building image from path: /tmp/build.3599795864.tar
I1122 00:08:25.545796  261776 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1122 00:08:25.568468  261776 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3599795864.tar
I1122 00:08:25.574496  261776 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3599795864.tar: stat -c "%s %y" /var/lib/minikube/build/build.3599795864.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3599795864.tar': No such file or directory
I1122 00:08:25.574535  261776 ssh_runner.go:362] scp /tmp/build.3599795864.tar --> /var/lib/minikube/build/build.3599795864.tar (3072 bytes)
I1122 00:08:25.610020  261776 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3599795864
I1122 00:08:25.625545  261776 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3599795864 -xf /var/lib/minikube/build/build.3599795864.tar
I1122 00:08:25.638523  261776 crio.go:315] Building image: /var/lib/minikube/build/build.3599795864
I1122 00:08:25.638596  261776 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-783762 /var/lib/minikube/build/build.3599795864 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1122 00:08:28.005222  261776 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-783762 /var/lib/minikube/build/build.3599795864 --cgroup-manager=cgroupfs: (2.366596525s)
I1122 00:08:28.005361  261776 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3599795864
I1122 00:08:28.023165  261776 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3599795864.tar
I1122 00:08:28.039258  261776 build_images.go:218] Built localhost/my-image:functional-783762 from /tmp/build.3599795864.tar
I1122 00:08:28.039304  261776 build_images.go:134] succeeded building to: functional-783762
I1122 00:08:28.039310  261776 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.05s)
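
The stderr above shows the flow behind `image build` on a crio node: the local build context is tarred, copied to /var/lib/minikube/build, unpacked, and built with `sudo podman build --cgroup-manager=cgroupfs`; the three STEP lines in the stdout correspond to a three-step Containerfile (FROM the busybox image, RUN true, ADD content.txt). A minimal sketch of re-running only that final build step over `minikube ssh`; the build directory path is the one shown in the log, which is created per invocation, so this is illustrative rather than reproducible as-is:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the podman command logged above, issued through `minikube ssh`
		// in the same quoted-command style the tests use.
		build := "sudo podman build -t localhost/my-image:functional-783762 " +
			"/var/lib/minikube/build/build.3599795864 --cgroup-manager=cgroupfs"
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-783762", "ssh", build).CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			panic(err)
		}
	}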

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-783762
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image load --daemon kicbase/echo-server:functional-783762 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-783762 image load --daemon kicbase/echo-server:functional-783762 --alsologtostderr: (1.473917947s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image load --daemon kicbase/echo-server:functional-783762 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-783762
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image load --daemon kicbase/echo-server:functional-783762 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image save kicbase/echo-server:functional-783762 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image rm kicbase/echo-server:functional-783762 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-783762
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 image save --daemon kicbase/echo-server:functional-783762 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-783762
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "269.73311ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "79.49742ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "301.09533ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "68.172272ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (93.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-783762 /tmp/TestFunctionalparallelMountCmdany-port4147511950/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763769740735697586" to /tmp/TestFunctionalparallelMountCmdany-port4147511950/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763769740735697586" to /tmp/TestFunctionalparallelMountCmdany-port4147511950/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763769740735697586" to /tmp/TestFunctionalparallelMountCmdany-port4147511950/001/test-1763769740735697586
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-783762 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (173.747941ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1122 00:02:20.909846  250664 retry.go:31] will retry after 607.127353ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 22 00:02 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 22 00:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 22 00:02 test-1763769740735697586
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh cat /mount-9p/test-1763769740735697586
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-783762 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [47d8e8ac-6e19-45c9-a495-6a6b6848a8e4] Pending
helpers_test.go:352: "busybox-mount" [47d8e8ac-6e19-45c9-a495-6a6b6848a8e4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [47d8e8ac-6e19-45c9-a495-6a6b6848a8e4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [47d8e8ac-6e19-45c9-a495-6a6b6848a8e4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m31.004182704s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-783762 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-783762 /tmp/TestFunctionalparallelMountCmdany-port4147511950/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (93.17s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-783762 /tmp/TestFunctionalparallelMountCmdspecific-port3029369325/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-783762 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (168.064774ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1122 00:03:54.075196  250664 retry.go:31] will retry after 648.290133ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-783762 /tmp/TestFunctionalparallelMountCmdspecific-port3029369325/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-783762 ssh "sudo umount -f /mount-9p": exit status 1 (176.418576ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-783762 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-783762 /tmp/TestFunctionalparallelMountCmdspecific-port3029369325/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.54s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-783762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1089069711/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-783762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1089069711/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-783762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1089069711/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-783762 ssh "findmnt -T" /mount1: exit status 1 (204.101051ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1122 00:03:55.647175  250664 retry.go:31] will retry after 328.301931ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-783762 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-783762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1089069711/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-783762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1089069711/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-783762 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1089069711/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.11s)
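The cleanup check above launches three mount daemons against the same host directory and then relies on a single minikube mount --kill=true to tear them all down. A minimal sketch of that pattern, assuming minikube is on PATH, the functional-783762 profile is running, and /tmp/src exists on the host (that directory name is made up for illustration):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		const profile = "functional-783762" // profile name taken from this report
		var daemons []*exec.Cmd
	
		// Start one background mount daemon per guest mount point, like the test does.
		for _, guest := range []string{"/mount1", "/mount2", "/mount3"} {
			c := exec.Command("minikube", "mount", "-p", profile, "/tmp/src:"+guest)
			if err := c.Start(); err != nil {
				fmt.Println("could not start mount daemon:", err)
				return
			}
			daemons = append(daemons, c)
		}
	
		// One kill switch removes every mount process for the profile (see the log above).
		out, err := exec.Command("minikube", "mount", "-p", profile, "--kill=true").CombinedOutput()
		fmt.Printf("kill output: %s (err=%v)\n", out, err)
	
		// Reap the children; after --kill they should already have exited.
		for _, c := range daemons {
			_ = c.Wait()
		}
	}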

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-783762 service list: (1.214184504s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.21s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-783762 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-783762 service list -o json: (1.231514983s)
functional_test.go:1504: Took "1.231620069s" to run "out/minikube-linux-amd64 -p functional-783762 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.23s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-783762
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-783762
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-783762
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (214.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1122 00:13:58.479963  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:15:21.550488  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-032009 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m34.353243993s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (214.95s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-032009 kubectl -- rollout status deployment/busybox: (4.198048203s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- exec busybox-7b57f96db7-92tjv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- exec busybox-7b57f96db7-nnrbm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- exec busybox-7b57f96db7-zrbnf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- exec busybox-7b57f96db7-92tjv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- exec busybox-7b57f96db7-nnrbm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- exec busybox-7b57f96db7-zrbnf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- exec busybox-7b57f96db7-92tjv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- exec busybox-7b57f96db7-nnrbm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- exec busybox-7b57f96db7-zrbnf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.68s)
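The deployment check above lists the busybox pods and runs nslookup from each one against kubernetes.io, kubernetes.default, and the fully qualified service name. A small sketch of the same loop, assuming the kubectl context ha-032009 from this report is reachable and that the default namespace contains only the busybox pods being exercised:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		ctx := "ha-032009" // kubectl context name taken from this report
	
		// List pod names the same way the test does.
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
			"-o", "jsonpath={.items[*].metadata.name}").Output()
		if err != nil {
			fmt.Println("listing pods failed:", err)
			return
		}
	
		hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range strings.Fields(string(out)) {
			for _, host := range hosts {
				if o, err := exec.Command("kubectl", "--context", ctx,
					"exec", pod, "--", "nslookup", host).CombinedOutput(); err != nil {
					fmt.Printf("%s cannot resolve %s: %v: %s\n", pod, host, err, o)
				}
			}
		}
		fmt.Println("DNS checks finished")
	}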

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- exec busybox-7b57f96db7-92tjv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- exec busybox-7b57f96db7-92tjv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- exec busybox-7b57f96db7-nnrbm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- exec busybox-7b57f96db7-nnrbm -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- exec busybox-7b57f96db7-zrbnf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 kubectl -- exec busybox-7b57f96db7-zrbnf -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.37s)
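The host-reachability check extracts the host IP from nslookup host.minikube.internal (fifth line, third field) and pings it from inside each pod. The sketch below runs the same pipeline for a single pod; the pod name is copied from this log and would normally be discovered with kubectl get pods first.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// podExec runs a shell snippet inside the given pod, as the test helper does.
	func podExec(ctx, pod, script string) (string, error) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"exec", pod, "--", "sh", "-c", script).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}
	
	func main() {
		ctx := "ha-032009"
		pod := "busybox-7b57f96db7-92tjv" // example pod name from this report
	
		// Fifth line of nslookup output, third field: the resolved host IP.
		ip, err := podExec(ctx, pod, "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
		if err != nil || ip == "" {
			fmt.Println("could not resolve host.minikube.internal:", err)
			return
		}
		if _, err := podExec(ctx, pod, "ping -c 1 "+ip); err != nil {
			fmt.Println("ping from pod failed:", err)
			return
		}
		fmt.Println("pod can reach the host at", ip)
	}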

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (43.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-032009 node add --alsologtostderr -v 5: (42.586538647s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (43.29s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-032009 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.69s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (11.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp testdata/cp-test.txt ha-032009:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp ha-032009:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2285407948/001/cp-test_ha-032009.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp ha-032009:/home/docker/cp-test.txt ha-032009-m02:/home/docker/cp-test_ha-032009_ha-032009-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m02 "sudo cat /home/docker/cp-test_ha-032009_ha-032009-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp ha-032009:/home/docker/cp-test.txt ha-032009-m03:/home/docker/cp-test_ha-032009_ha-032009-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m03 "sudo cat /home/docker/cp-test_ha-032009_ha-032009-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp ha-032009:/home/docker/cp-test.txt ha-032009-m04:/home/docker/cp-test_ha-032009_ha-032009-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m04 "sudo cat /home/docker/cp-test_ha-032009_ha-032009-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp testdata/cp-test.txt ha-032009-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp ha-032009-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2285407948/001/cp-test_ha-032009-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp ha-032009-m02:/home/docker/cp-test.txt ha-032009:/home/docker/cp-test_ha-032009-m02_ha-032009.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009 "sudo cat /home/docker/cp-test_ha-032009-m02_ha-032009.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp ha-032009-m02:/home/docker/cp-test.txt ha-032009-m03:/home/docker/cp-test_ha-032009-m02_ha-032009-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m03 "sudo cat /home/docker/cp-test_ha-032009-m02_ha-032009-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp ha-032009-m02:/home/docker/cp-test.txt ha-032009-m04:/home/docker/cp-test_ha-032009-m02_ha-032009-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m04 "sudo cat /home/docker/cp-test_ha-032009-m02_ha-032009-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp testdata/cp-test.txt ha-032009-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp ha-032009-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2285407948/001/cp-test_ha-032009-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp ha-032009-m03:/home/docker/cp-test.txt ha-032009:/home/docker/cp-test_ha-032009-m03_ha-032009.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009 "sudo cat /home/docker/cp-test_ha-032009-m03_ha-032009.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp ha-032009-m03:/home/docker/cp-test.txt ha-032009-m02:/home/docker/cp-test_ha-032009-m03_ha-032009-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m02 "sudo cat /home/docker/cp-test_ha-032009-m03_ha-032009-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp ha-032009-m03:/home/docker/cp-test.txt ha-032009-m04:/home/docker/cp-test_ha-032009-m03_ha-032009-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m04 "sudo cat /home/docker/cp-test_ha-032009-m03_ha-032009-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp testdata/cp-test.txt ha-032009-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp ha-032009-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2285407948/001/cp-test_ha-032009-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp ha-032009-m04:/home/docker/cp-test.txt ha-032009:/home/docker/cp-test_ha-032009-m04_ha-032009.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009 "sudo cat /home/docker/cp-test_ha-032009-m04_ha-032009.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp ha-032009-m04:/home/docker/cp-test.txt ha-032009-m02:/home/docker/cp-test_ha-032009-m04_ha-032009-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m02 "sudo cat /home/docker/cp-test_ha-032009-m04_ha-032009-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 cp ha-032009-m04:/home/docker/cp-test.txt ha-032009-m03:/home/docker/cp-test_ha-032009-m04_ha-032009-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 ssh -n ha-032009-m03 "sudo cat /home/docker/cp-test_ha-032009-m04_ha-032009-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.06s)
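The CopyFile sequence above pushes a test file onto every node with minikube cp and reads it back over minikube ssh -n <node> to confirm the copy. A condensed sketch of that loop, assuming the ha-032009 profile and node names from this report and a local cp-test.txt in the working directory:

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// run invokes minikube against the ha-032009 profile, as the helpers in the log do.
	func run(args ...string) ([]byte, error) {
		full := append([]string{"-p", "ha-032009"}, args...)
		return exec.Command("minikube", full...).CombinedOutput()
	}
	
	func main() {
		nodes := []string{"ha-032009", "ha-032009-m02", "ha-032009-m03", "ha-032009-m04"}
		for _, n := range nodes {
			// Copy the file onto the node, then read it back over ssh to verify it.
			if out, err := run("cp", "cp-test.txt", n+":/home/docker/cp-test.txt"); err != nil {
				fmt.Printf("cp to %s failed: %v: %s\n", n, err, out)
				continue
			}
			out, err := run("ssh", "-n", n, "sudo cat /home/docker/cp-test.txt")
			fmt.Printf("%s: err=%v content=%q\n", n, err, out)
		}
	}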

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (88.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 node stop m02 --alsologtostderr -v 5
E1122 00:17:10.379767  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:17:10.386311  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:17:10.397768  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:17:10.419228  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:17:10.460726  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:17:10.542323  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:17:10.703936  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:17:11.025771  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:17:11.667919  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:17:12.949725  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:17:15.511649  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:17:20.633563  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:17:30.875559  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:17:51.357108  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-032009 node stop m02 --alsologtostderr -v 5: (1m28.154277732s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-032009 status --alsologtostderr -v 5: exit status 7 (532.731789ms)

                                                
                                                
-- stdout --
	ha-032009
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-032009-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-032009-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-032009-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:18:31.656882  266319 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:18:31.656987  266319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:18:31.656995  266319 out.go:374] Setting ErrFile to fd 2...
	I1122 00:18:31.657000  266319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:18:31.657246  266319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 00:18:31.657422  266319 out.go:368] Setting JSON to false
	I1122 00:18:31.657453  266319 mustload.go:66] Loading cluster: ha-032009
	I1122 00:18:31.657513  266319 notify.go:221] Checking for updates...
	I1122 00:18:31.657854  266319 config.go:182] Loaded profile config "ha-032009": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:18:31.657872  266319 status.go:174] checking status of ha-032009 ...
	I1122 00:18:31.660125  266319 status.go:371] ha-032009 host status = "Running" (err=<nil>)
	I1122 00:18:31.660150  266319 host.go:66] Checking if "ha-032009" exists ...
	I1122 00:18:31.663438  266319 main.go:143] libmachine: domain ha-032009 has defined MAC address 52:54:00:25:8b:0f in network mk-ha-032009
	I1122 00:18:31.664025  266319 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:8b:0f", ip: ""} in network mk-ha-032009: {Iface:virbr1 ExpiryTime:2025-11-22 01:12:41 +0000 UTC Type:0 Mac:52:54:00:25:8b:0f Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-032009 Clientid:01:52:54:00:25:8b:0f}
	I1122 00:18:31.664073  266319 main.go:143] libmachine: domain ha-032009 has defined IP address 192.168.39.233 and MAC address 52:54:00:25:8b:0f in network mk-ha-032009
	I1122 00:18:31.664241  266319 host.go:66] Checking if "ha-032009" exists ...
	I1122 00:18:31.664535  266319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:18:31.667584  266319 main.go:143] libmachine: domain ha-032009 has defined MAC address 52:54:00:25:8b:0f in network mk-ha-032009
	I1122 00:18:31.668112  266319 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:8b:0f", ip: ""} in network mk-ha-032009: {Iface:virbr1 ExpiryTime:2025-11-22 01:12:41 +0000 UTC Type:0 Mac:52:54:00:25:8b:0f Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:ha-032009 Clientid:01:52:54:00:25:8b:0f}
	I1122 00:18:31.668137  266319 main.go:143] libmachine: domain ha-032009 has defined IP address 192.168.39.233 and MAC address 52:54:00:25:8b:0f in network mk-ha-032009
	I1122 00:18:31.668288  266319 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/ha-032009/id_rsa Username:docker}
	I1122 00:18:31.756296  266319 ssh_runner.go:195] Run: systemctl --version
	I1122 00:18:31.765644  266319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:18:31.787327  266319 kubeconfig.go:125] found "ha-032009" server: "https://192.168.39.254:8443"
	I1122 00:18:31.787379  266319 api_server.go:166] Checking apiserver status ...
	I1122 00:18:31.787447  266319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:18:31.809156  266319 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	W1122 00:18:31.821397  266319 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:18:31.821484  266319 ssh_runner.go:195] Run: ls
	I1122 00:18:31.827190  266319 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1122 00:18:31.833404  266319 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1122 00:18:31.833431  266319 status.go:463] ha-032009 apiserver status = Running (err=<nil>)
	I1122 00:18:31.833441  266319 status.go:176] ha-032009 status: &{Name:ha-032009 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:18:31.833459  266319 status.go:174] checking status of ha-032009-m02 ...
	I1122 00:18:31.835221  266319 status.go:371] ha-032009-m02 host status = "Stopped" (err=<nil>)
	I1122 00:18:31.835246  266319 status.go:384] host is not running, skipping remaining checks
	I1122 00:18:31.835251  266319 status.go:176] ha-032009-m02 status: &{Name:ha-032009-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:18:31.835267  266319 status.go:174] checking status of ha-032009-m03 ...
	I1122 00:18:31.836658  266319 status.go:371] ha-032009-m03 host status = "Running" (err=<nil>)
	I1122 00:18:31.836687  266319 host.go:66] Checking if "ha-032009-m03" exists ...
	I1122 00:18:31.839530  266319 main.go:143] libmachine: domain ha-032009-m03 has defined MAC address 52:54:00:96:6f:af in network mk-ha-032009
	I1122 00:18:31.840048  266319 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:6f:af", ip: ""} in network mk-ha-032009: {Iface:virbr1 ExpiryTime:2025-11-22 01:14:51 +0000 UTC Type:0 Mac:52:54:00:96:6f:af Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-032009-m03 Clientid:01:52:54:00:96:6f:af}
	I1122 00:18:31.840077  266319 main.go:143] libmachine: domain ha-032009-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:96:6f:af in network mk-ha-032009
	I1122 00:18:31.840218  266319 host.go:66] Checking if "ha-032009-m03" exists ...
	I1122 00:18:31.840479  266319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:18:31.842823  266319 main.go:143] libmachine: domain ha-032009-m03 has defined MAC address 52:54:00:96:6f:af in network mk-ha-032009
	I1122 00:18:31.843159  266319 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:6f:af", ip: ""} in network mk-ha-032009: {Iface:virbr1 ExpiryTime:2025-11-22 01:14:51 +0000 UTC Type:0 Mac:52:54:00:96:6f:af Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:ha-032009-m03 Clientid:01:52:54:00:96:6f:af}
	I1122 00:18:31.843178  266319 main.go:143] libmachine: domain ha-032009-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:96:6f:af in network mk-ha-032009
	I1122 00:18:31.843307  266319 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/ha-032009-m03/id_rsa Username:docker}
	I1122 00:18:31.930178  266319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:18:31.955899  266319 kubeconfig.go:125] found "ha-032009" server: "https://192.168.39.254:8443"
	I1122 00:18:31.955930  266319 api_server.go:166] Checking apiserver status ...
	I1122 00:18:31.955964  266319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:18:31.979657  266319 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1831/cgroup
	W1122 00:18:31.993819  266319 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1831/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:18:31.993890  266319 ssh_runner.go:195] Run: ls
	I1122 00:18:31.999755  266319 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1122 00:18:32.005452  266319 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1122 00:18:32.005479  266319 status.go:463] ha-032009-m03 apiserver status = Running (err=<nil>)
	I1122 00:18:32.005488  266319 status.go:176] ha-032009-m03 status: &{Name:ha-032009-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:18:32.005504  266319 status.go:174] checking status of ha-032009-m04 ...
	I1122 00:18:32.007305  266319 status.go:371] ha-032009-m04 host status = "Running" (err=<nil>)
	I1122 00:18:32.007324  266319 host.go:66] Checking if "ha-032009-m04" exists ...
	I1122 00:18:32.010428  266319 main.go:143] libmachine: domain ha-032009-m04 has defined MAC address 52:54:00:56:ff:52 in network mk-ha-032009
	I1122 00:18:32.010902  266319 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ff:52", ip: ""} in network mk-ha-032009: {Iface:virbr1 ExpiryTime:2025-11-22 01:16:25 +0000 UTC Type:0 Mac:52:54:00:56:ff:52 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-032009-m04 Clientid:01:52:54:00:56:ff:52}
	I1122 00:18:32.010931  266319 main.go:143] libmachine: domain ha-032009-m04 has defined IP address 192.168.39.77 and MAC address 52:54:00:56:ff:52 in network mk-ha-032009
	I1122 00:18:32.011075  266319 host.go:66] Checking if "ha-032009-m04" exists ...
	I1122 00:18:32.011288  266319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:18:32.013523  266319 main.go:143] libmachine: domain ha-032009-m04 has defined MAC address 52:54:00:56:ff:52 in network mk-ha-032009
	I1122 00:18:32.013924  266319 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:ff:52", ip: ""} in network mk-ha-032009: {Iface:virbr1 ExpiryTime:2025-11-22 01:16:25 +0000 UTC Type:0 Mac:52:54:00:56:ff:52 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:ha-032009-m04 Clientid:01:52:54:00:56:ff:52}
	I1122 00:18:32.013944  266319 main.go:143] libmachine: domain ha-032009-m04 has defined IP address 192.168.39.77 and MAC address 52:54:00:56:ff:52 in network mk-ha-032009
	I1122 00:18:32.014114  266319 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/ha-032009-m04/id_rsa Username:docker}
	I1122 00:18:32.102965  266319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:18:32.124158  266319 status.go:176] ha-032009-m04 status: &{Name:ha-032009-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (88.69s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1122 00:18:32.318876  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (42.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 node start m02 --alsologtostderr -v 5
E1122 00:18:58.473802  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-032009 node start m02 --alsologtostderr -v 5: (41.931098576s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (42.86s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.94s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (399.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 stop --alsologtostderr -v 5
E1122 00:19:54.243012  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:22:10.379398  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:22:38.087543  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-032009 stop --alsologtostderr -v 5: (4m23.800361382s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 start --wait true --alsologtostderr -v 5
E1122 00:23:58.473706  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-032009 start --wait true --alsologtostderr -v 5: (2m15.38914187s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (399.34s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-032009 node delete m03 --alsologtostderr -v 5: (17.908156656s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.57s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (244.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 stop --alsologtostderr -v 5
E1122 00:27:10.380157  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:28:58.473713  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-032009 stop --alsologtostderr -v 5: (4m4.11027097s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-032009 status --alsologtostderr -v 5: exit status 7 (70.855391ms)

                                                
                                                
-- stdout --
	ha-032009
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-032009-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-032009-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:30:19.077975  269630 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:30:19.078131  269630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:30:19.078141  269630 out.go:374] Setting ErrFile to fd 2...
	I1122 00:30:19.078145  269630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:30:19.078373  269630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 00:30:19.078583  269630 out.go:368] Setting JSON to false
	I1122 00:30:19.078620  269630 mustload.go:66] Loading cluster: ha-032009
	I1122 00:30:19.078691  269630 notify.go:221] Checking for updates...
	I1122 00:30:19.079054  269630 config.go:182] Loaded profile config "ha-032009": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:30:19.079072  269630 status.go:174] checking status of ha-032009 ...
	I1122 00:30:19.081528  269630 status.go:371] ha-032009 host status = "Stopped" (err=<nil>)
	I1122 00:30:19.081550  269630 status.go:384] host is not running, skipping remaining checks
	I1122 00:30:19.081557  269630 status.go:176] ha-032009 status: &{Name:ha-032009 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:30:19.081581  269630 status.go:174] checking status of ha-032009-m02 ...
	I1122 00:30:19.083097  269630 status.go:371] ha-032009-m02 host status = "Stopped" (err=<nil>)
	I1122 00:30:19.083116  269630 status.go:384] host is not running, skipping remaining checks
	I1122 00:30:19.083123  269630 status.go:176] ha-032009-m02 status: &{Name:ha-032009-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:30:19.083141  269630 status.go:174] checking status of ha-032009-m04 ...
	I1122 00:30:19.084501  269630 status.go:371] ha-032009-m04 host status = "Stopped" (err=<nil>)
	I1122 00:30:19.084519  269630 status.go:384] host is not running, skipping remaining checks
	I1122 00:30:19.084526  269630 status.go:176] ha-032009-m04 status: &{Name:ha-032009-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (244.18s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (126.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1122 00:32:01.552751  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:32:10.380297  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-032009 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (2m5.891387722s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (126.55s)
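After the restart, the suite confirms every node reports a Ready condition of "True" using the go-template query shown above. The sketch below runs the same query; the single quotes in the logged command are shell quoting and are dropped here because exec passes the template verbatim. It assumes the current kubectl context points at the restarted cluster.

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// One "True"/"False" per node: the Ready condition status, newline separated.
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		for _, status := range strings.Fields(string(out)) {
			if status != "True" {
				fmt.Println("a node is not Ready:", status)
				return
			}
		}
		fmt.Println("all nodes report Ready")
	}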

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (87.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 node add --control-plane --alsologtostderr -v 5
E1122 00:33:33.450242  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-032009 node add --control-plane --alsologtostderr -v 5: (1m26.897991083s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-032009 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (87.62s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.73s)

                                                
                                    
TestJSONOutput/start/Command (80.89s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-113617 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1122 00:33:58.473372  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-113617 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m20.893967823s)
--- PASS: TestJSONOutput/start/Command (80.89s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-113617 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-113617 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-113617 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-113617 --output=json --user=testUser: (6.994812552s)
--- PASS: TestJSONOutput/stop/Command (7.00s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-954538 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-954538 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (84.825644ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f5377e54-178f-4641-805d-15b5e2136a4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-954538] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"61488677-a626-40ec-b75a-7c0fdd3516de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21934"}}
	{"specversion":"1.0","id":"b1538743-4a83-43f8-a084-5a5fc088fce1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3b42042f-eee0-46d0-b306-9c44e88f78d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig"}}
	{"specversion":"1.0","id":"39557917-01eb-4387-bafa-e0aa0c4c385e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube"}}
	{"specversion":"1.0","id":"ad996315-c9eb-46fb-bc67-2192185cc8f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"70a457dc-b28e-40b7-934f-eb869584f187","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"001d2897-a265-4da9-85f4-7f305d76513e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-954538" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-954538
--- PASS: TestErrorJSONOutput (0.25s)
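Note: each line emitted by --output=json above is a CloudEvents-style JSON object with the fields visible in the stdout block (specversion, id, source, type, datacontenttype, data). A minimal Go sketch for consuming such a stream on stdin; the event struct below only mirrors the fields shown in this log and is not minikube's own type:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the JSON fields shown in the test output above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines in the stream
			}
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}

Piping the start command above into this program would print one line per step or info event; the DRV_UNSUPPORTED_OS failure arrives as a single io.k8s.sigs.minikube.error event carrying exitcode 56.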

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (88.31s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-684880 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-684880 --driver=kvm2  --container-runtime=crio: (41.946881107s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-687923 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-687923 --driver=kvm2  --container-runtime=crio: (43.664589982s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-684880
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-687923
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-687923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-687923
helpers_test.go:175: Cleaning up "first-684880" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-684880
--- PASS: TestMinikubeProfile (88.31s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (22.49s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-859633 --memory=3072 --mount-string /tmp/TestMountStartserial2113291783/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1122 00:37:10.380423  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-859633 --memory=3072 --mount-string /tmp/TestMountStartserial2113291783/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (21.490999981s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.49s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-859633 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-859633 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (20.89s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-880682 --memory=3072 --mount-string /tmp/TestMountStartserial2113291783/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-880682 --memory=3072 --mount-string /tmp/TestMountStartserial2113291783/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.892365846s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.89s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-880682 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-880682 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-859633 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-880682 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-880682 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

                                                
                                    
TestMountStart/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-880682
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-880682: (1.386917999s)
--- PASS: TestMountStart/serial/Stop (1.39s)

                                                
                                    
TestMountStart/serial/RestartStopped (20.88s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-880682
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-880682: (19.882741617s)
--- PASS: TestMountStart/serial/RestartStopped (20.88s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-880682 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-880682 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (130.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-267585 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1122 00:38:58.473346  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-267585 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m10.469112461s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (130.83s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-267585 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-267585 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-267585 -- rollout status deployment/busybox: (3.752653824s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-267585 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-267585 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-267585 -- exec busybox-7b57f96db7-svnsl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-267585 -- exec busybox-7b57f96db7-v5wks -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-267585 -- exec busybox-7b57f96db7-svnsl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-267585 -- exec busybox-7b57f96db7-v5wks -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-267585 -- exec busybox-7b57f96db7-svnsl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-267585 -- exec busybox-7b57f96db7-v5wks -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.39s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-267585 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-267585 -- exec busybox-7b57f96db7-svnsl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-267585 -- exec busybox-7b57f96db7-svnsl -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-267585 -- exec busybox-7b57f96db7-v5wks -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-267585 -- exec busybox-7b57f96db7-v5wks -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                    
TestMultiNode/serial/AddNode (43.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-267585 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-267585 -v=5 --alsologtostderr: (43.438733552s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.90s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-267585 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.48s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 cp testdata/cp-test.txt multinode-267585:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 cp multinode-267585:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3222939821/001/cp-test_multinode-267585.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 cp multinode-267585:/home/docker/cp-test.txt multinode-267585-m02:/home/docker/cp-test_multinode-267585_multinode-267585-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585-m02 "sudo cat /home/docker/cp-test_multinode-267585_multinode-267585-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 cp multinode-267585:/home/docker/cp-test.txt multinode-267585-m03:/home/docker/cp-test_multinode-267585_multinode-267585-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585-m03 "sudo cat /home/docker/cp-test_multinode-267585_multinode-267585-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 cp testdata/cp-test.txt multinode-267585-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 cp multinode-267585-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3222939821/001/cp-test_multinode-267585-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 cp multinode-267585-m02:/home/docker/cp-test.txt multinode-267585:/home/docker/cp-test_multinode-267585-m02_multinode-267585.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585 "sudo cat /home/docker/cp-test_multinode-267585-m02_multinode-267585.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 cp multinode-267585-m02:/home/docker/cp-test.txt multinode-267585-m03:/home/docker/cp-test_multinode-267585-m02_multinode-267585-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585-m03 "sudo cat /home/docker/cp-test_multinode-267585-m02_multinode-267585-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 cp testdata/cp-test.txt multinode-267585-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 cp multinode-267585-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3222939821/001/cp-test_multinode-267585-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 cp multinode-267585-m03:/home/docker/cp-test.txt multinode-267585:/home/docker/cp-test_multinode-267585-m03_multinode-267585.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585 "sudo cat /home/docker/cp-test_multinode-267585-m03_multinode-267585.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 cp multinode-267585-m03:/home/docker/cp-test.txt multinode-267585-m02:/home/docker/cp-test_multinode-267585-m03_multinode-267585-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 ssh -n multinode-267585-m02 "sudo cat /home/docker/cp-test_multinode-267585-m03_multinode-267585-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.28s)
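Every CopyFile step above is the same round-trip: minikube cp pushes a file to a node, then minikube ssh -n <node> "sudo cat ..." reads it back. A hedged Go sketch of that round-trip with os/exec, reusing the profile and node names from the log; the copyAndVerify helper is illustrative only and is not part of helpers_test.go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// copyAndVerify pushes a local file to a node with `minikube cp` and reads
	// it back over `minikube ssh`, mirroring the steps logged above.
	func copyAndVerify(profile, node, local, remote string) (string, error) {
		cp := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp", local, node+":"+remote)
		if out, err := cp.CombinedOutput(); err != nil {
			return "", fmt.Errorf("cp failed: %v: %s", err, out)
		}
		cat := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "-n", node, "sudo cat "+remote)
		out, err := cat.CombinedOutput()
		return string(out), err
	}

	func main() {
		body, err := copyAndVerify("multinode-267585", "multinode-267585-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
		if err != nil {
			fmt.Println("verify failed:", err)
			return
		}
		fmt.Print(body)
	}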

                                                
                                    
TestMultiNode/serial/StopNode (2.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-267585 node stop m03: (1.607475397s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-267585 status: exit status 7 (355.681136ms)

                                                
                                                
-- stdout --
	multinode-267585
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-267585-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-267585-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-267585 status --alsologtostderr: exit status 7 (350.027068ms)

                                                
                                                
-- stdout --
	multinode-267585
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-267585-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-267585-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:41:14.493475  275420 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:41:14.493601  275420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:41:14.493608  275420 out.go:374] Setting ErrFile to fd 2...
	I1122 00:41:14.493614  275420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:41:14.493834  275420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 00:41:14.494090  275420 out.go:368] Setting JSON to false
	I1122 00:41:14.494123  275420 mustload.go:66] Loading cluster: multinode-267585
	I1122 00:41:14.494238  275420 notify.go:221] Checking for updates...
	I1122 00:41:14.494642  275420 config.go:182] Loaded profile config "multinode-267585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:41:14.494662  275420 status.go:174] checking status of multinode-267585 ...
	I1122 00:41:14.496773  275420 status.go:371] multinode-267585 host status = "Running" (err=<nil>)
	I1122 00:41:14.496790  275420 host.go:66] Checking if "multinode-267585" exists ...
	I1122 00:41:14.499885  275420 main.go:143] libmachine: domain multinode-267585 has defined MAC address 52:54:00:33:75:2a in network mk-multinode-267585
	I1122 00:41:14.500461  275420 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:33:75:2a", ip: ""} in network mk-multinode-267585: {Iface:virbr1 ExpiryTime:2025-11-22 01:38:20 +0000 UTC Type:0 Mac:52:54:00:33:75:2a Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-267585 Clientid:01:52:54:00:33:75:2a}
	I1122 00:41:14.500513  275420 main.go:143] libmachine: domain multinode-267585 has defined IP address 192.168.39.160 and MAC address 52:54:00:33:75:2a in network mk-multinode-267585
	I1122 00:41:14.500746  275420 host.go:66] Checking if "multinode-267585" exists ...
	I1122 00:41:14.501046  275420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:41:14.503833  275420 main.go:143] libmachine: domain multinode-267585 has defined MAC address 52:54:00:33:75:2a in network mk-multinode-267585
	I1122 00:41:14.504346  275420 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:33:75:2a", ip: ""} in network mk-multinode-267585: {Iface:virbr1 ExpiryTime:2025-11-22 01:38:20 +0000 UTC Type:0 Mac:52:54:00:33:75:2a Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:multinode-267585 Clientid:01:52:54:00:33:75:2a}
	I1122 00:41:14.504383  275420 main.go:143] libmachine: domain multinode-267585 has defined IP address 192.168.39.160 and MAC address 52:54:00:33:75:2a in network mk-multinode-267585
	I1122 00:41:14.504565  275420 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/multinode-267585/id_rsa Username:docker}
	I1122 00:41:14.589939  275420 ssh_runner.go:195] Run: systemctl --version
	I1122 00:41:14.597311  275420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:41:14.619248  275420 kubeconfig.go:125] found "multinode-267585" server: "https://192.168.39.160:8443"
	I1122 00:41:14.619295  275420 api_server.go:166] Checking apiserver status ...
	I1122 00:41:14.619333  275420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:41:14.642163  275420 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1333/cgroup
	W1122 00:41:14.655211  275420 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1333/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:41:14.655299  275420 ssh_runner.go:195] Run: ls
	I1122 00:41:14.661758  275420 api_server.go:253] Checking apiserver healthz at https://192.168.39.160:8443/healthz ...
	I1122 00:41:14.666902  275420 api_server.go:279] https://192.168.39.160:8443/healthz returned 200:
	ok
	I1122 00:41:14.666929  275420 status.go:463] multinode-267585 apiserver status = Running (err=<nil>)
	I1122 00:41:14.666940  275420 status.go:176] multinode-267585 status: &{Name:multinode-267585 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:41:14.666959  275420 status.go:174] checking status of multinode-267585-m02 ...
	I1122 00:41:14.668949  275420 status.go:371] multinode-267585-m02 host status = "Running" (err=<nil>)
	I1122 00:41:14.668977  275420 host.go:66] Checking if "multinode-267585-m02" exists ...
	I1122 00:41:14.672212  275420 main.go:143] libmachine: domain multinode-267585-m02 has defined MAC address 52:54:00:3e:3c:23 in network mk-multinode-267585
	I1122 00:41:14.672805  275420 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3e:3c:23", ip: ""} in network mk-multinode-267585: {Iface:virbr1 ExpiryTime:2025-11-22 01:39:44 +0000 UTC Type:0 Mac:52:54:00:3e:3c:23 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-267585-m02 Clientid:01:52:54:00:3e:3c:23}
	I1122 00:41:14.672833  275420 main.go:143] libmachine: domain multinode-267585-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:3e:3c:23 in network mk-multinode-267585
	I1122 00:41:14.673049  275420 host.go:66] Checking if "multinode-267585-m02" exists ...
	I1122 00:41:14.673333  275420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:41:14.675967  275420 main.go:143] libmachine: domain multinode-267585-m02 has defined MAC address 52:54:00:3e:3c:23 in network mk-multinode-267585
	I1122 00:41:14.676580  275420 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3e:3c:23", ip: ""} in network mk-multinode-267585: {Iface:virbr1 ExpiryTime:2025-11-22 01:39:44 +0000 UTC Type:0 Mac:52:54:00:3e:3c:23 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-267585-m02 Clientid:01:52:54:00:3e:3c:23}
	I1122 00:41:14.676611  275420 main.go:143] libmachine: domain multinode-267585-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:3e:3c:23 in network mk-multinode-267585
	I1122 00:41:14.676825  275420 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21934-244751/.minikube/machines/multinode-267585-m02/id_rsa Username:docker}
	I1122 00:41:14.760697  275420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:41:14.778360  275420 status.go:176] multinode-267585-m02 status: &{Name:multinode-267585-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:41:14.778402  275420 status.go:174] checking status of multinode-267585-m03 ...
	I1122 00:41:14.780195  275420 status.go:371] multinode-267585-m03 host status = "Stopped" (err=<nil>)
	I1122 00:41:14.780217  275420 status.go:384] host is not running, skipping remaining checks
	I1122 00:41:14.780222  275420 status.go:176] multinode-267585-m03 status: &{Name:multinode-267585-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (45.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-267585 node start m03 -v=5 --alsologtostderr: (45.228550969s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (45.74s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (300.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-267585
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-267585
E1122 00:42:10.380059  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:43:58.480249  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-267585: (2m52.647143125s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-267585 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-267585 --wait=true -v=5 --alsologtostderr: (2m8.10338786s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-267585
--- PASS: TestMultiNode/serial/RestartKeepsNodes (300.89s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-267585 node delete m03: (2.204586191s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.68s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (175.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 stop
E1122 00:47:10.379645  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:48:41.555055  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:48:58.480834  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-267585 stop: (2m55.513962242s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-267585 status: exit status 7 (68.876888ms)

                                                
                                                
-- stdout --
	multinode-267585
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-267585-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-267585 status --alsologtostderr: exit status 7 (66.089143ms)

                                                
                                                
-- stdout --
	multinode-267585
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-267585-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:49:59.737801  277865 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:49:59.738068  277865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:49:59.738077  277865 out.go:374] Setting ErrFile to fd 2...
	I1122 00:49:59.738081  277865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:49:59.738271  277865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 00:49:59.738446  277865 out.go:368] Setting JSON to false
	I1122 00:49:59.738478  277865 mustload.go:66] Loading cluster: multinode-267585
	I1122 00:49:59.738660  277865 notify.go:221] Checking for updates...
	I1122 00:49:59.738889  277865 config.go:182] Loaded profile config "multinode-267585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:49:59.738907  277865 status.go:174] checking status of multinode-267585 ...
	I1122 00:49:59.741261  277865 status.go:371] multinode-267585 host status = "Stopped" (err=<nil>)
	I1122 00:49:59.741276  277865 status.go:384] host is not running, skipping remaining checks
	I1122 00:49:59.741281  277865 status.go:176] multinode-267585 status: &{Name:multinode-267585 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:49:59.741298  277865 status.go:174] checking status of multinode-267585-m02 ...
	I1122 00:49:59.742723  277865 status.go:371] multinode-267585-m02 host status = "Stopped" (err=<nil>)
	I1122 00:49:59.742743  277865 status.go:384] host is not running, skipping remaining checks
	I1122 00:49:59.742751  277865 status.go:176] multinode-267585-m02 status: &{Name:multinode-267585-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (175.65s)
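Both status checks above exit with status 7 once every host is stopped, even though the command still prints a normal status table. A small Go sketch, assuming the same binary path and profile as the log, that treats that particular exit code as "cluster stopped" rather than a hard failure:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-267585", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("status: everything running")
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
			// Exit status 7 is what both status runs above return for stopped hosts.
			fmt.Println("status: cluster stopped")
		default:
			fmt.Println("status check failed:", err)
		}
	}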

                                                
                                    
TestMultiNode/serial/RestartMultiNode (87.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-267585 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1122 00:50:13.452249  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-267585 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m27.107622738s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-267585 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (87.61s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (42.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-267585
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-267585-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-267585-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (85.099782ms)

                                                
                                                
-- stdout --
	* [multinode-267585-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-267585-m02' is duplicated with machine name 'multinode-267585-m02' in profile 'multinode-267585'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-267585-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-267585-m03 --driver=kvm2  --container-runtime=crio: (41.10381813s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-267585
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-267585: exit status 80 (223.527095ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-267585 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-267585-m03 already exists in multinode-267585-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-267585-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.34s)
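The name-conflict check above fails fast because the requested profile name multinode-267585-m02 is already in use as a machine name inside the multinode-267585 profile. A rough Go sketch of a pre-flight check that looks for the candidate name under both the profiles and machines directories of MINIKUBE_HOME; the directory layout is inferred from the config.json and id_rsa paths elsewhere in this log, not from minikube's source, and the helper name is hypothetical:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// nameInUse reports whether a candidate name already exists as a profile
	// directory or a machine directory under MINIKUBE_HOME (layout assumed
	// from the paths printed in this report).
	func nameInUse(minikubeHome, name string) bool {
		for _, dir := range []string{"profiles", "machines"} {
			if _, err := os.Stat(filepath.Join(minikubeHome, dir, name)); err == nil {
				return true
			}
		}
		return false
	}

	func main() {
		home := os.Getenv("MINIKUBE_HOME")
		if home == "" {
			home = filepath.Join(os.Getenv("HOME"), ".minikube")
		}
		fmt.Println(nameInUse(home, "multinode-267585-m02"))
	}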

                                                
                                    
TestScheduledStopUnix (114.4s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-993828 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-993828 --memory=3072 --driver=kvm2  --container-runtime=crio: (42.65265424s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-993828 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1122 00:55:35.693157  280275 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:55:35.693444  280275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:55:35.693456  280275 out.go:374] Setting ErrFile to fd 2...
	I1122 00:55:35.693460  280275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:55:35.693652  280275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 00:55:35.693930  280275 out.go:368] Setting JSON to false
	I1122 00:55:35.694020  280275 mustload.go:66] Loading cluster: scheduled-stop-993828
	I1122 00:55:35.694326  280275 config.go:182] Loaded profile config "scheduled-stop-993828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:55:35.694403  280275 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/config.json ...
	I1122 00:55:35.694583  280275 mustload.go:66] Loading cluster: scheduled-stop-993828
	I1122 00:55:35.694710  280275 config.go:182] Loaded profile config "scheduled-stop-993828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-993828 -n scheduled-stop-993828
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-993828 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1122 00:55:36.010880  280320 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:55:36.011013  280320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:55:36.011021  280320 out.go:374] Setting ErrFile to fd 2...
	I1122 00:55:36.011032  280320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:55:36.011206  280320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 00:55:36.011454  280320 out.go:368] Setting JSON to false
	I1122 00:55:36.011662  280320 daemonize_unix.go:73] killing process 280309 as it is an old scheduled stop
	I1122 00:55:36.011795  280320 mustload.go:66] Loading cluster: scheduled-stop-993828
	I1122 00:55:36.012133  280320 config.go:182] Loaded profile config "scheduled-stop-993828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:55:36.012211  280320 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/config.json ...
	I1122 00:55:36.012386  280320 mustload.go:66] Loading cluster: scheduled-stop-993828
	I1122 00:55:36.012485  280320 config.go:182] Loaded profile config "scheduled-stop-993828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1122 00:55:36.018670  250664 retry.go:31] will retry after 79.846µs: open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/pid: no such file or directory
I1122 00:55:36.019927  250664 retry.go:31] will retry after 217.304µs: open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/pid: no such file or directory
I1122 00:55:36.021127  250664 retry.go:31] will retry after 198.526µs: open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/pid: no such file or directory
I1122 00:55:36.022339  250664 retry.go:31] will retry after 270.177µs: open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/pid: no such file or directory
I1122 00:55:36.023500  250664 retry.go:31] will retry after 425.124µs: open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/pid: no such file or directory
I1122 00:55:36.024636  250664 retry.go:31] will retry after 467.581µs: open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/pid: no such file or directory
I1122 00:55:36.025786  250664 retry.go:31] will retry after 1.337954ms: open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/pid: no such file or directory
I1122 00:55:36.028017  250664 retry.go:31] will retry after 1.074833ms: open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/pid: no such file or directory
I1122 00:55:36.029179  250664 retry.go:31] will retry after 1.811785ms: open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/pid: no such file or directory
I1122 00:55:36.031426  250664 retry.go:31] will retry after 2.221669ms: open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/pid: no such file or directory
I1122 00:55:36.034659  250664 retry.go:31] will retry after 6.490097ms: open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/pid: no such file or directory
I1122 00:55:36.041875  250664 retry.go:31] will retry after 8.915084ms: open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/pid: no such file or directory
I1122 00:55:36.051321  250664 retry.go:31] will retry after 7.973826ms: open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/pid: no such file or directory
I1122 00:55:36.059588  250664 retry.go:31] will retry after 16.742795ms: open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/pid: no such file or directory
I1122 00:55:36.076863  250664 retry.go:31] will retry after 22.170268ms: open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/pid: no such file or directory
I1122 00:55:36.100140  250664 retry.go:31] will retry after 63.771171ms: open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/pid: no such file or directory
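The retry.go lines above show the test polling for the scheduled-stop pid file with an interval that roughly doubles on each attempt. The following is a minimal, hypothetical Go sketch of that poll-with-growing-backoff pattern; waitForFile, the pid path, and the timing constants are illustrative and are not taken from minikube's retry.go.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile retries os.Open with an interval that roughly doubles each
// attempt, giving up once maxWait has elapsed overall.
func waitForFile(path string, maxWait time.Duration) (*os.File, error) {
	interval := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for {
		f, err := os.Open(path)
		if err == nil {
			return f, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s: %w", path, err)
		}
		fmt.Printf("will retry after %v: %v\n", interval, err)
		time.Sleep(interval)
		interval *= 2
	}
}

func main() {
	if _, err := waitForFile("/tmp/scheduled-stop.pid", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}
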
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-993828 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-993828 -n scheduled-stop-993828
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-993828
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-993828 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1122 00:56:01.768729  280469 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:56:01.769030  280469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:56:01.769040  280469 out.go:374] Setting ErrFile to fd 2...
	I1122 00:56:01.769044  280469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:56:01.769254  280469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 00:56:01.769534  280469 out.go:368] Setting JSON to false
	I1122 00:56:01.769614  280469 mustload.go:66] Loading cluster: scheduled-stop-993828
	I1122 00:56:01.769961  280469 config.go:182] Loaded profile config "scheduled-stop-993828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:56:01.770041  280469 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/scheduled-stop-993828/config.json ...
	I1122 00:56:01.770235  280469 mustload.go:66] Loading cluster: scheduled-stop-993828
	I1122 00:56:01.770335  280469 config.go:182] Loaded profile config "scheduled-stop-993828": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-993828
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-993828: exit status 7 (66.294518ms)

                                                
                                                
-- stdout --
	scheduled-stop-993828
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-993828 -n scheduled-stop-993828
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-993828 -n scheduled-stop-993828: exit status 7 (63.752137ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-993828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-993828
--- PASS: TestScheduledStopUnix (114.40s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (137.22s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.640430630 start -p running-upgrade-702170 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.640430630 start -p running-upgrade-702170 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m10.113837332s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-702170 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-702170 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.63337339s)
helpers_test.go:175: Cleaning up "running-upgrade-702170" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-702170
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-702170: (1.025230118s)
--- PASS: TestRunningBinaryUpgrade (137.22s)

                                                
                                    
x
+
TestKubernetesUpgrade (207.06s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-450435 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1122 00:57:10.379372  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-450435 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m34.932822656s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-450435
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-450435: (2.361647818s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-450435 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-450435 status --format={{.Host}}: exit status 7 (87.749864ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-450435 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-450435 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.595399745s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-450435 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-450435 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-450435 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (87.188921ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-450435] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-450435
	    minikube start -p kubernetes-upgrade-450435 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4504352 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-450435 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
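The K8S_DOWNGRADE_UNSUPPORTED exit above comes from minikube refusing to move an existing v1.34.1 cluster back to v1.28.0. As a rough illustration only (not minikube's actual validation code; function names are hypothetical), a guard of that shape can be expressed with golang.org/x/mod/semver:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkDowngrade rejects a requested version older than the existing one.
func checkDowngrade(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	if err := checkDowngrade("v1.34.1", "v1.28.0"); err != nil {
		fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
	}
}
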
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-450435 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-450435 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.999081791s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-450435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-450435
--- PASS: TestKubernetesUpgrade (207.06s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-061445 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-061445 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (107.662243ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-061445] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (87.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-061445 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-061445 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m27.024885384s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-061445 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (87.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-842088 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-842088 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (129.386431ms)

                                                
                                                
-- stdout --
	* [false-842088] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:56:50.992300  281552 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:56:50.992556  281552 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:56:50.992566  281552 out.go:374] Setting ErrFile to fd 2...
	I1122 00:56:50.992570  281552 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:56:50.992767  281552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-244751/.minikube/bin
	I1122 00:56:50.993251  281552 out.go:368] Setting JSON to false
	I1122 00:56:50.994100  281552 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":31139,"bootTime":1763741872,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:56:50.994159  281552 start.go:143] virtualization: kvm guest
	I1122 00:56:50.996026  281552 out.go:179] * [false-842088] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:56:50.997313  281552 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:56:50.997332  281552 notify.go:221] Checking for updates...
	I1122 00:56:50.999714  281552 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:56:51.001047  281552 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-244751/kubeconfig
	I1122 00:56:51.002411  281552 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-244751/.minikube
	I1122 00:56:51.003650  281552 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:56:51.005018  281552 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:56:51.006671  281552 config.go:182] Loaded profile config "NoKubernetes-061445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:56:51.006782  281552 config.go:182] Loaded profile config "force-systemd-env-073043": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:56:51.006877  281552 config.go:182] Loaded profile config "offline-crio-950982": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:56:51.006980  281552 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:56:51.043935  281552 out.go:179] * Using the kvm2 driver based on user configuration
	I1122 00:56:51.045336  281552 start.go:309] selected driver: kvm2
	I1122 00:56:51.045359  281552 start.go:930] validating driver "kvm2" against <nil>
	I1122 00:56:51.045377  281552 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:56:51.047559  281552 out.go:203] 
	W1122 00:56:51.048810  281552 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1122 00:56:51.049925  281552 out.go:203] 

                                                
                                                
** /stderr **
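The MK_USAGE failure above is the expected outcome of this test: starting with --cni=false while the container runtime is crio is rejected before any VM is created. A minimal sketch of that kind of compatibility check is shown below; the function name and the set of runtimes treated as CNI-dependent are assumptions, not minikube's implementation.

package main

import "fmt"

// validateCNI rejects disabling CNI for runtimes that cannot run without one.
func validateCNI(runtime, cni string) error {
	needsCNI := runtime == "crio" || runtime == "containerd"
	if needsCNI && cni == "false" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}
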
net_test.go:88: 
----------------------- debugLogs start: false-842088 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-842088

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-842088

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-842088

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-842088

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-842088

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-842088

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-842088

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-842088

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-842088

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-842088

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-842088

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-842088" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-842088" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-842088

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842088"

                                                
                                                
----------------------- debugLogs end: false-842088 [took: 3.320647291s] --------------------------------
helpers_test.go:175: Cleaning up "false-842088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-842088
--- PASS: TestNetworkPlugins/group/false (3.61s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (143.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.998683690 start -p stopped-upgrade-504824 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.998683690 start -p stopped-upgrade-504824 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m23.964152845s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.998683690 -p stopped-upgrade-504824 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.998683690 -p stopped-upgrade-504824 stop: (2.42866888s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-504824 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1122 00:58:58.474052  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-504824 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.142105391s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (143.54s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (47.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-061445 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-061445 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.351849735s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-061445 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-061445 status -o json: exit status 2 (233.390411ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-061445","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
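The status JSON above captures the state the test asserts on: the host is Running while kubelet and the API server are Stopped after the --no-kubernetes restart. A minimal Go sketch for consuming that output is shown below; the struct fields mirror the keys in the JSON printed above and nothing more.

package main

import (
	"encoding/json"
	"fmt"
)

type minikubeStatus struct {
	Name       string `json:"Name"`
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
	Worker     bool   `json:"Worker"`
}

func main() {
	raw := `{"Name":"NoKubernetes-061445","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st minikubeStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// A cluster started with --no-kubernetes reports the host running while
	// kubelet and the API server stay stopped.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}
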
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-061445
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (47.48s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (55.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-061445 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-061445 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (55.660278151s)
--- PASS: TestNoKubernetes/serial/Start (55.66s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-504824
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-504824: (1.139436731s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                    
x
+
TestPause/serial/Start (73.99s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-061914 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-061914 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m13.987306786s)
--- PASS: TestPause/serial/Start (73.99s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21934-244751/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-061445 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-061445 "sudo systemctl is-active --quiet service kubelet": exit status 1 (162.668105ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.99s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-061445
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-061445: (1.365693202s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (50.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-061445 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-061445 --driver=kvm2  --container-runtime=crio: (50.743082872s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (50.74s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-061445 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-061445 "sudo systemctl is-active --quiet service kubelet": exit status 1 (164.782823ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.16s)

                                                
                                    
x
+
TestISOImage/Setup (62.42s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-688997 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-688997 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m2.420442527s)
--- PASS: TestISOImage/Setup (62.42s)

                                                
                                    
x
+
TestISOImage/Binaries/crictl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.18s)
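Each of the TestISOImage/Binaries subtests that follow runs the same iso_test.go:76 step: ssh into the guest and check that `which <binary>` succeeds. A hedged, table-driven sketch of that pattern is shown below; the test name, binary list, and profile name are taken from this report for illustration and do not reproduce the actual iso_test.go code.

package iso_test

import (
	"os/exec"
	"testing"
)

func TestGuestBinaries(t *testing.T) {
	binaries := []string{"crictl", "curl", "docker", "git", "iptables", "podman", "rsync", "socat", "wget"}
	for _, bin := range binaries {
		bin := bin
		t.Run(bin, func(t *testing.T) {
			// "which" exits non-zero when the binary is absent, failing the subtest.
			cmd := exec.Command("out/minikube-linux-amd64", "-p", "guest-688997", "ssh", "which "+bin)
			if out, err := cmd.CombinedOutput(); err != nil {
				t.Fatalf("%s not found in guest: %v\n%s", bin, err, out)
			}
		})
	}
}
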

                                                
                                    
x
+
TestISOImage/Binaries/curl (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.22s)

                                                
                                    
x
+
TestISOImage/Binaries/docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "which wget"
E1122 01:10:32.646118  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/enable-default-cni-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/wget (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (106.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E1122 01:02:10.380946  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m46.724975852s)
--- PASS: TestNetworkPlugins/group/auto/Start (106.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (60.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m0.709879581s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (60.71s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (86.84s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m26.844020761s)
--- PASS: TestNetworkPlugins/group/flannel/Start (86.84s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.18s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-842088 "pgrep -a kubelet"
I1122 01:03:41.486448  250664 config.go:182] Loaded profile config "auto-842088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.31s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-842088 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wqwn6" [63fca2d7-7810-45e6-82b9-2d92937b0cd6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wqwn6" [63fca2d7-7810-45e6-82b9-2d92937b0cd6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004769111s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-jfrwt" [5b60d7a7-2465-49bc-b176-e3a7e7f9cca7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007011469s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-842088 "pgrep -a kubelet"
I1122 01:03:48.990181  250664 config.go:182] Loaded profile config "kindnet-842088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-842088 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8snqs" [591ed618-6848-4bb5-a4a4-3150406073a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8snqs" [591ed618-6848-4bb5-a4a4-3150406073a7] Running
E1122 01:03:58.473839  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006000037s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-842088 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-842088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-842088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-842088 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-842088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-842088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (82.72s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m22.723127702s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.72s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-ftf2j" [0b4fdb9a-7ac9-4795-8f66-42d4a9d49d93] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005075346s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (89.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m29.195161086s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (89.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-842088 "pgrep -a kubelet"
I1122 01:04:19.989891  250664 config.go:182] Loaded profile config "flannel-842088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.26s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-842088 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-842w2" [ceffd9de-3bf5-4117-9379-22c067514bd3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-842w2" [ceffd9de-3bf5-4117-9379-22c067514bd3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004531042s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-842088 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-842088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-842088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (96.86s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m36.863712934s)
--- PASS: TestNetworkPlugins/group/bridge/Start (96.86s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (101.94s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E1122 01:05:21.557415  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-842088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m41.937324938s)
--- PASS: TestNetworkPlugins/group/calico/Start (101.94s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-842088 "pgrep -a kubelet"
I1122 01:05:29.817759  250664 config.go:182] Loaded profile config "enable-default-cni-842088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.28s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-842088 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bt8tc" [348773e2-ace5-4c54-bca3-966bbefbe318] Pending
helpers_test.go:352: "netcat-cd4db9dbf-bt8tc" [348773e2-ace5-4c54-bca3-966bbefbe318] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004176635s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-842088 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-842088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-842088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-842088 "pgrep -a kubelet"
I1122 01:05:44.014112  250664 config.go:182] Loaded profile config "custom-flannel-842088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-842088 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bsvp7" [6dbf83d7-3d25-444e-be69-a5ff8dde8e31] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bsvp7" [6dbf83d7-3d25-444e-be69-a5ff8dde8e31] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.01093115s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-842088 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-842088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-842088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (96.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-930338 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-930338 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m36.029514099s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (96.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (86.79s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-108330 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-108330 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.789707024s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (86.79s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-842088 "pgrep -a kubelet"
I1122 01:06:24.647918  250664 config.go:182] Loaded profile config "bridge-842088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.65s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-842088 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4gscx" [4efea189-c959-4321-a8a5-9b43e0445ffa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4gscx" [4efea189-c959-4321-a8a5-9b43e0445ffa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.006072734s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.65s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-842088 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-842088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-842088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (88.24s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-818975 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-818975 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m28.239068236s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.24s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-9c8x5" [57a3aaf7-49f5-4f5a-a962-f4d8d6ec21a9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00452985s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-842088 "pgrep -a kubelet"
I1122 01:07:02.108403  250664 config.go:182] Loaded profile config "calico-842088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.05s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-842088 replace --force -f testdata/netcat-deployment.yaml
I1122 01:07:03.114115  250664 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-52p2m" [b2252362-c677-4e42-b447-ddd05ab6b5ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-52p2m" [b2252362-c677-4e42-b447-ddd05ab6b5ce] Running
E1122 01:07:10.379735  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/functional-783762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.006020773s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.05s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-842088 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-842088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-842088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.35s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-842337 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-842337 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m25.350278161s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-930338 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [429efd53-062e-46b3-a163-909d3fb50163] Pending
helpers_test.go:352: "busybox" [429efd53-062e-46b3-a163-909d3fb50163] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [429efd53-062e-46b3-a163-909d3fb50163] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.006119631s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-930338 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.33s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-108330 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [889bd7b4-7ef5-41e2-bccc-6c595ab39295] Pending
helpers_test.go:352: "busybox" [889bd7b4-7ef5-41e2-bccc-6c595ab39295] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [889bd7b4-7ef5-41e2-bccc-6c595ab39295] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005845983s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-108330 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-930338 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-930338 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.447698244s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-930338 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.55s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (82.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-930338 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-930338 --alsologtostderr -v=3: (1m22.393026936s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (82.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-108330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-108330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.090981848s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-108330 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (73.16s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-108330 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-108330 --alsologtostderr -v=3: (1m13.162113399s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (73.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-818975 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [205be5ad-1a83-4cf0-a941-6ba0f0e1ae47] Pending
helpers_test.go:352: "busybox" [205be5ad-1a83-4cf0-a941-6ba0f0e1ae47] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [205be5ad-1a83-4cf0-a941-6ba0f0e1ae47] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004552576s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-818975 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-818975 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-818975 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (81.51s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-818975 --alsologtostderr -v=3
E1122 01:08:41.753332  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:41.759807  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:41.771256  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:41.792670  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:41.834198  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:41.915775  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:42.077391  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:42.399198  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:42.802877  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:42.809308  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:42.820777  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:42.843022  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:42.884526  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:42.966114  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:43.040612  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:43.128159  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:43.449942  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:44.092221  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:44.322047  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:45.373698  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:46.884323  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:47.935753  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:52.006763  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:53.057603  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:08:58.472990  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/addons-266876/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-818975 --alsologtostderr -v=3: (1m21.51435398s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (81.51s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-842337 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c58803a7-1f1a-4176-b812-b219cfdd31a2] Pending
helpers_test.go:352: "busybox" [c58803a7-1f1a-4176-b812-b219cfdd31a2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1122 01:09:02.248794  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [c58803a7-1f1a-4176-b812-b219cfdd31a2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005546834s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-842337 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-108330 -n no-preload-108330
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-108330 -n no-preload-108330: exit status 7 (71.935756ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-108330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (59.24s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-108330 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1122 01:09:03.299107  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-108330 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (58.886356024s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-108330 -n no-preload-108330
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (59.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-842337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-842337 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.010951704s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-842337 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (87.85s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-842337 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-842337 --alsologtostderr -v=3: (1m27.853701438s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (87.85s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-930338 -n old-k8s-version-930338
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-930338 -n old-k8s-version-930338: exit status 7 (70.682534ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-930338 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (58.69s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-930338 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1122 01:09:13.784856  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:09:13.791352  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:09:13.802850  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:09:13.824392  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:09:13.865965  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:09:13.947464  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:09:14.109389  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:09:14.431389  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:09:15.073275  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:09:16.355482  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:09:18.916896  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:09:22.730098  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:09:23.780692  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:09:24.038517  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:09:34.280415  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:09:54.762261  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-930338 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (58.374320536s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-930338 -n old-k8s-version-930338
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (58.69s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-818975 -n embed-certs-818975
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-818975 -n embed-certs-818975: exit status 7 (79.699813ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-818975 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (50.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-818975 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-818975 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (50.192907573s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-818975 -n embed-certs-818975
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.60s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rpg5k" [44cb7359-30d8-494e-baab-4bc0fb74395b] Running
E1122 01:10:03.692311  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:04.742726  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004925538s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rpg5k" [44cb7359-30d8-494e-baab-4bc0fb74395b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004101478s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-108330 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-n79j9" [0faaea90-e3cf-4e9e-a87e-4d059c985d2f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-n79j9" [0faaea90-e3cf-4e9e-a87e-4d059c985d2f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.006381467s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-108330 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-108330 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-108330 --alsologtostderr -v=1: (1.00962566s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-108330 -n no-preload-108330
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-108330 -n no-preload-108330: exit status 2 (252.250503ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-108330 -n no-preload-108330
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-108330 -n no-preload-108330: exit status 2 (247.321563ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-108330 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-108330 --alsologtostderr -v=1: (1.09724559s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-108330 -n no-preload-108330
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-108330 -n no-preload-108330
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.17s)
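
Note: this Pause subtest (and the later ones for old-k8s-version, embed-certs, default-k8s-diff-port and newest-cni) follows the same round trip visible in the log: pause the profile, confirm `status` reports the apiserver as Paused and the kubelet as Stopped (the non-zero exits are expected there), then unpause and check status again. Below is a minimal Go sketch of that round trip driving the same CLI commands and flags shown above; it is illustrative only, not the actual start_stop_delete_test.go code, and the helper name `status` is made up for this sketch.

// Sketch only: replays the pause/unpause sequence from the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// status returns the single templated field printed by `minikube status`.
func status(profile, field string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).CombinedOutput()
	// status exits non-zero while components are paused/stopped (see the
	// "exit status 2" lines above), so only the printed text is inspected.
	return strings.TrimSpace(string(out))
}

func main() {
	const profile = "no-preload-108330"

	_ = exec.Command("out/minikube-linux-amd64", "pause", "-p", profile, "--alsologtostderr", "-v=1").Run()
	fmt.Println("after pause:   APIServer =", status(profile, "APIServer"),
		"Kubelet =", status(profile, "Kubelet")) // log above shows Paused / Stopped

	_ = exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile, "--alsologtostderr", "-v=1").Run()
	fmt.Println("after unpause: APIServer =", status(profile, "APIServer"),
		"Kubelet =", status(profile, "Kubelet")) // status exits 0 again in the log above
}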

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (50.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-720539 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-720539 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (50.927281815s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (50.93s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-n79j9" [0faaea90-e3cf-4e9e-a87e-4d059c985d2f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004718094s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-930338 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-930338 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-930338 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-930338 -n old-k8s-version-930338
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-930338 -n old-k8s-version-930338: exit status 2 (247.659781ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-930338 -n old-k8s-version-930338
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-930338 -n old-k8s-version-930338: exit status 2 (265.639177ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-930338 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-930338 --alsologtostderr -v=1: (1.012906998s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-930338 -n old-k8s-version-930338
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-930338 -n old-k8s-version-930338
E1122 01:10:30.074311  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/enable-default-cni-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:30.080827  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/enable-default-cni-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:30.093902  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/enable-default-cni-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:30.115362  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/enable-default-cni-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:30.157202  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/enable-default-cni-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:30.238804  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/enable-default-cni-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.27s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.21s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.19s)
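
Note: each PersistentMounts subtest above asserts that the given guest path is backed by a persistent ext4 mount, by running `df -t ext4 <path> | grep <path>` over SSH and requiring a match. The Go sketch below is a rough local equivalent that scans /proc/mounts instead of shelling out to df; it is illustrative only, not the iso_test.go helper, and `isExt4Mount` is a made-up name.

// Sketch only: checks whether a path is an ext4 mount point via /proc/mounts.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// isExt4Mount reports whether path appears in /proc/mounts with fstype ext4.
func isExt4Mount(path string) (bool, error) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Each line: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[1] == path && fields[2] == "ext4" {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := isExt4Mount("/data")
	fmt.Println("/data is ext4:", ok, err)
}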

                                                
                                    
x
+
TestISOImage/VersionJSON (0.19s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "cat /version.json"
E1122 01:10:35.207837  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/enable-default-cni-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1763503576-21924
iso_test.go:118:   kicbase_version: v0.0.48-1761985721-21837
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: fae26615d717024600f131fc4fa68f9450a9ef29
--- PASS: TestISOImage/VersionJSON (0.19s)
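
Note: the VersionJSON test reads /version.json from the guest over SSH and logs the four fields shown above. The small Go sketch below parses a payload reconstructed from those logged values; it is illustrative only and not minikube's actual iso_test.go parsing code (the exact JSON key layout in the ISO is an assumption based on the field names printed above).

// Sketch only: parses a /version.json-style payload into the logged fields.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Values taken from the iso_test.go:118 lines logged above.
	raw := []byte(`{
		"iso_version": "v1.37.0-1763503576-21924",
		"kicbase_version": "v0.0.48-1761985721-21837",
		"minikube_version": "v1.37.0",
		"commit": "fae26615d717024600f131fc4fa68f9450a9ef29"
	}`)

	var v map[string]string
	if err := json.Unmarshal(raw, &v); err != nil {
		fmt.Println("parse error:", err)
		return
	}
	for _, k := range []string{"iso_version", "kicbase_version", "minikube_version", "commit"} {
		fmt.Printf("  %s: %s\n", k, v[k])
	}
}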

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.19s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-688997 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.19s)
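
Note: the eBPFSupport check above treats the guest kernel as eBPF-capable when BTF type information is exposed at /sys/kernel/btf/vmlinux. A hedged Go equivalent of the same probe (not the real iso_test.go code) is sketched below.

// Sketch only: mirrors `test -f /sys/kernel/btf/vmlinux && echo OK || echo NOT FOUND`.
package main

import (
	"fmt"
	"os"
)

func main() {
	if _, err := os.Stat("/sys/kernel/btf/vmlinux"); err == nil {
		fmt.Println("OK") // matches the 'OK' echoed by the ssh command above
	} else {
		fmt.Println("NOT FOUND")
	}
}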

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-842337 -n default-k8s-diff-port-842337
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-842337 -n default-k8s-diff-port-842337: exit status 7 (68.340187ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-842337 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-842337 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1122 01:10:40.330141  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/enable-default-cni-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:44.246778  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/custom-flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:44.253276  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/custom-flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:44.264867  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/custom-flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:44.286813  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/custom-flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:44.328289  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/custom-flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:44.409967  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/custom-flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:44.573801  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/custom-flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:44.895427  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/custom-flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:45.537422  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/custom-flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-842337 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (51.164543541s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-842337 -n default-k8s-diff-port-842337
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.46s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b4vr8" [93737b20-7d6c-42f4-9aff-43152dc7e998] Running
E1122 01:10:46.819320  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/custom-flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:49.380702  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/custom-flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:10:50.571816  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/enable-default-cni-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004590756s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b4vr8" [93737b20-7d6c-42f4-9aff-43152dc7e998] Running
E1122 01:10:54.502111  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/custom-flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004621353s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-818975 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-818975 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-818975 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-818975 -n embed-certs-818975
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-818975 -n embed-certs-818975: exit status 2 (235.907474ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-818975 -n embed-certs-818975
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-818975 -n embed-certs-818975: exit status 2 (242.638358ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-818975 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-818975 -n embed-certs-818975
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-818975 -n embed-certs-818975
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.96s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-720539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-720539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.283870754s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-720539 --alsologtostderr -v=3
E1122 01:11:11.053941  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/enable-default-cni-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-720539 --alsologtostderr -v=3: (7.830254195s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.83s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-720539 -n newest-cni-720539
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-720539 -n newest-cni-720539: exit status 7 (80.278059ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-720539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-720539 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1122 01:11:25.020120  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/bridge-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:25.026668  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/bridge-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:25.038392  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/bridge-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:25.060533  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/bridge-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:25.102063  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/bridge-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:25.183836  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/bridge-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:25.226397  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/custom-flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:25.345974  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/bridge-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:25.613630  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/auto-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:25.667319  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/bridge-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:26.309777  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/bridge-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:26.664641  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/kindnet-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:27.591696  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/bridge-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-720539 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (35.885872296s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-720539 -n newest-cni-720539
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pcgsx" [ba21104d-ffc0-4ea6-875e-09dd4ace764c] Running
E1122 01:11:30.153662  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/bridge-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005309972s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pcgsx" [ba21104d-ffc0-4ea6-875e-09dd4ace764c] Running
E1122 01:11:35.275324  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/bridge-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005370457s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-842337 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-842337 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-842337 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-842337 -n default-k8s-diff-port-842337
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-842337 -n default-k8s-diff-port-842337: exit status 2 (245.885804ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-842337 -n default-k8s-diff-port-842337
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-842337 -n default-k8s-diff-port-842337: exit status 2 (229.832334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-842337 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-842337 -n default-k8s-diff-port-842337
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-842337 -n default-k8s-diff-port-842337
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.35s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-720539 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-720539 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-720539 -n newest-cni-720539
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-720539 -n newest-cni-720539: exit status 2 (299.626916ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-720539 -n newest-cni-720539
E1122 01:11:55.838357  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/calico-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:55.844817  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/calico-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:55.856310  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/calico-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:55.877808  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/calico-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:55.919308  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/calico-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:56.000840  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/calico-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-720539 -n newest-cni-720539: exit status 2 (276.439967ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-720539 --alsologtostderr -v=1
E1122 01:11:56.162840  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/calico-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:11:56.484597  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/calico-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-720539 --alsologtostderr -v=1: (1.03335518s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-720539 -n newest-cni-720539
E1122 01:11:57.126257  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/calico-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-720539 -n newest-cni-720539
E1122 01:11:57.645800  250664 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-244751/.minikube/profiles/flannel-842088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.21s)

                                                
                                    

Test skip (40/345)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
136 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
257 TestNetworkPlugins/group/kubenet 3.54
267 TestNetworkPlugins/group/cilium 3.99
278 TestStartStop/group/disable-driver-mounts 0.2
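Most of the entries above reduce to a handful of guard clauses at the top of the corresponding tests; the skip messages quoted in the sections below come straight from those guards. A minimal illustrative sketch of the two most common patterns (runtime guard and OS guard); this is not the actual minikube test source, and containerRuntime stands in for the value normally supplied via the test flags.

package integration

import (
	"runtime"
	"testing"
)

// containerRuntime stands in for the runtime selected via test flags;
// this run used crio, so docker-only tests skip.
const containerRuntime = "crio"

// Pattern behind skips such as "only runs with docker container runtime,
// currently testing crio" (TestDockerFlags, DockerEnv, PodmanEnv, Skaffold, ...).
func TestDockerOnlySketch(t *testing.T) {
	if containerRuntime != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", containerRuntime)
	}
}

// Pattern behind skips such as "Test for darwin and windows" and
// "Skip if not darwin" (kubectl download, HyperKit driver tests).
func TestDarwinWindowsOnlySketch(t *testing.T) {
	if runtime.GOOS != "darwin" && runtime.GOOS != "windows" {
		t.Skip("Test for darwin and windows")
	}
}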
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266876 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)
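The skip above hinges on detecting whether the host is a GCE instance. A hedged sketch of one standard detection approach, probing GCE's metadata server (which answers with the "Metadata-Flavor: Google" header); minikube's own check in the test helpers may be implemented differently.

package integration

import (
	"net/http"
	"testing"
	"time"
)

// onGCE reports whether the test host looks like a GCE instance by probing
// the metadata server. Sketch only; minikube's detection may differ.
func onGCE() bool {
	req, err := http.NewRequest(http.MethodGet, "http://metadata.google.internal/computeMetadata/v1/", nil)
	if err != nil {
		return false
	}
	req.Header.Set("Metadata-Flavor", "Google")
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.Header.Get("Metadata-Flavor") == "Google"
}

// Guard sketch matching the skip message quoted above.
func TestRealCredentialsGuardSketch(t *testing.T) {
	if !onGCE() {
		t.Skip("This test requires a GCE instance (excluding Cloud Shell) with a container based driver")
	}
}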

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
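All of the TunnelCmd skips above share one cause: the tunnel tests need to run 'route' via passwordless sudo on the host. A hedged sketch of such a pre-flight check (the real check in functional_test_tunnel_test.go may differ); 'sudo -n' fails instead of prompting when a password would be required.

package integration

import (
	"os/exec"
	"testing"
)

// requireRoutePrivileges skips the calling test when 'route' cannot be run
// via passwordless sudo, mirroring the skip message quoted above. Sketch only.
func requireRoutePrivileges(t *testing.T) {
	t.Helper()
	if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
		t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
	}
}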

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-842088 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-842088

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-842088

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-842088

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-842088

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-842088

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-842088

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-842088

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-842088

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-842088

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-842088

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-842088

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-842088" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-842088" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-842088

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842088"

                                                
                                                
----------------------- debugLogs end: kubenet-842088 [took: 3.371959866s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-842088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-842088
--- SKIP: TestNetworkPlugins/group/kubenet (3.54s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-842088 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-842088

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-842088

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-842088

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-842088

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-842088

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-842088

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-842088

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-842088

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-842088

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-842088

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-842088

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-842088" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-842088

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-842088

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-842088

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-842088

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-842088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-842088" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-842088

                                                

>>> host: docker daemon status:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: docker daemon config:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: docker system info:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: cri-docker daemon status:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: cri-docker daemon config:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: cri-dockerd version:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: containerd daemon status:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: containerd daemon config:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: containerd config dump:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: crio daemon status:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: crio daemon config:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: /etc/crio:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"

>>> host: crio config:
* Profile "cilium-842088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842088"
----------------------- debugLogs end: cilium-842088 [took: 3.796115279s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-842088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-842088
--- SKIP: TestNetworkPlugins/group/cilium (3.99s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-427975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-427975
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
