Test Report: KVM_Linux_crio 21550

0aba0a8e31d541259ffdeb45c9650281430067b8:2025-09-17:41464

Test fail (9/324)

TestAddons/parallel/Ingress (171.61s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-772113 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-772113 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-772113 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [47ae305d-ca3e-4058-a73e-7fbde8abf594] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [47ae305d-ca3e-4058-a73e-7fbde8abf594] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 24.006615202s
I0917 00:02:56.528928  145530 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-772113 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.294180148s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-772113 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.50.205
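
Note on the curl failure above: curl exits with status 28 when the request times out, so the SSH session itself succeeded but the ingress never answered on 127.0.0.1:80 inside the VM. A minimal manual re-check, assuming the addons-772113 profile were still running (illustrative commands, not part of the test harness):

    out/minikube-linux-amd64 -p addons-772113 ssh "curl -sv --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
    kubectl --context addons-772113 -n ingress-nginx get pods,svc -o wide
    kubectl --context addons-772113 get ingress,endpoints -A
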
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-772113 -n addons-772113
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-772113 logs -n 25: (1.718784381s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ delete │ -p download-only-174003 │ download-only-174003 │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │ 16 Sep 25 23:58 UTC │
	│ start │ --download-only -p binary-mirror-630200 --alsologtostderr --binary-mirror http://127.0.0.1:36323 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ binary-mirror-630200 │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │  │
	│ delete │ -p binary-mirror-630200 │ binary-mirror-630200 │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │ 16 Sep 25 23:58 UTC │
	│ addons │ enable dashboard -p addons-772113 │ addons-772113 │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │  │
	│ addons │ disable dashboard -p addons-772113 │ addons-772113 │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │  │
	│ start │ -p addons-772113 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-772113 │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │ 17 Sep 25 00:01 UTC │
	│ addons │ addons-772113 addons disable volcano --alsologtostderr -v=1 │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:01 UTC │ 17 Sep 25 00:01 UTC │
	│ addons │ addons-772113 addons disable gcp-auth --alsologtostderr -v=1 │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ addons │ addons-772113 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ addons │ addons-772113 addons disable yakd --alsologtostderr -v=1 │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ ssh │ addons-772113 ssh cat /opt/local-path-provisioner/pvc-0f753cd3-a1fd-4a21-92c7-ac96b7a52aac_default_test-pvc/file1 │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ addons │ addons-772113 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:03 UTC │
	│ addons │ addons-772113 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ addons │ enable headlamp -p addons-772113 --alsologtostderr -v=1 │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ ip │ addons-772113 ip │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ addons │ addons-772113 addons disable registry --alsologtostderr -v=1 │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ addons │ addons-772113 addons disable metrics-server --alsologtostderr -v=1 │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ addons │ addons-772113 addons disable headlamp --alsologtostderr -v=1 │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ ssh │ addons-772113 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │  │
	│ addons │ addons-772113 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:02 UTC │ 17 Sep 25 00:02 UTC │
	│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-772113 │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ addons │ addons-772113 addons disable registry-creds --alsologtostderr -v=1 │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ addons │ addons-772113 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ addons │ addons-772113 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:03 UTC │ 17 Sep 25 00:03 UTC │
	│ ip │ addons-772113 ip │ addons-772113 │ jenkins │ v1.37.0 │ 17 Sep 25 00:05 UTC │ 17 Sep 25 00:05 UTC │
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:58:23
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:58:23.698902  146126 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:58:23.699023  146126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:58:23.699030  146126 out.go:374] Setting ErrFile to fd 2...
	I0916 23:58:23.699037  146126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:58:23.699261  146126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
	I0916 23:58:23.699825  146126 out.go:368] Setting JSON to false
	I0916 23:58:23.700696  146126 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9648,"bootTime":1758057456,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:58:23.700796  146126 start.go:140] virtualization: kvm guest
	I0916 23:58:23.702816  146126 out.go:179] * [addons-772113] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:58:23.704473  146126 notify.go:220] Checking for updates...
	I0916 23:58:23.704501  146126 out.go:179]   - MINIKUBE_LOCATION=21550
	I0916 23:58:23.707173  146126 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:58:23.709028  146126 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	I0916 23:58:23.710745  146126 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	I0916 23:58:23.712244  146126 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 23:58:23.713901  146126 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 23:58:23.715391  146126 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:58:23.747672  146126 out.go:179] * Using the kvm2 driver based on user configuration
	I0916 23:58:23.749359  146126 start.go:304] selected driver: kvm2
	I0916 23:58:23.749384  146126 start.go:918] validating driver "kvm2" against <nil>
	I0916 23:58:23.749398  146126 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 23:58:23.750306  146126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:58:23.750394  146126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21550-141589/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 23:58:23.765040  146126 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0916 23:58:23.765121  146126 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:58:23.765464  146126 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:58:23.765507  146126 cni.go:84] Creating CNI manager for ""
	I0916 23:58:23.765567  146126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 23:58:23.765579  146126 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 23:58:23.765669  146126 start.go:348] cluster config:
	{Name:addons-772113 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-772113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:58:23.765827  146126 iso.go:125] acquiring lock: {Name:mkbc497934aeda3bf1eaa3e96176da91d2f10b30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:58:23.767625  146126 out.go:179] * Starting "addons-772113" primary control-plane node in "addons-772113" cluster
	I0916 23:58:23.768835  146126 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:58:23.768896  146126 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0916 23:58:23.768908  146126 cache.go:58] Caching tarball of preloaded images
	I0916 23:58:23.768993  146126 preload.go:172] Found /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 23:58:23.769004  146126 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0916 23:58:23.769296  146126 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/config.json ...
	I0916 23:58:23.769318  146126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/config.json: {Name:mkf3ea24b2bc4ddc584601f0616bc000bbdab850 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:23.769470  146126 start.go:360] acquireMachinesLock for addons-772113: {Name:mk4898504d31cc722a10b1787754ef8ecd27d0ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 23:58:23.769516  146126 start.go:364] duration metric: took 32.63µs to acquireMachinesLock for "addons-772113"
	I0916 23:58:23.769532  146126 start.go:93] Provisioning new machine with config: &{Name:addons-772113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-772113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 23:58:23.769593  146126 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 23:58:23.771194  146126 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0916 23:58:23.771369  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:58:23.771421  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:58:23.784801  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I0916 23:58:23.785424  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:58:23.786075  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:58:23.786101  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:58:23.786532  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:58:23.786739  146126 main.go:141] libmachine: (addons-772113) Calling .GetMachineName
	I0916 23:58:23.786948  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:58:23.787127  146126 start.go:159] libmachine.API.Create for "addons-772113" (driver="kvm2")
	I0916 23:58:23.787158  146126 client.go:168] LocalClient.Create starting
	I0916 23:58:23.787207  146126 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem
	I0916 23:58:23.991679  146126 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/cert.pem
	I0916 23:58:24.309580  146126 main.go:141] libmachine: Running pre-create checks...
	I0916 23:58:24.309618  146126 main.go:141] libmachine: (addons-772113) Calling .PreCreateCheck
	I0916 23:58:24.310197  146126 main.go:141] libmachine: (addons-772113) Calling .GetConfigRaw
	I0916 23:58:24.310703  146126 main.go:141] libmachine: Creating machine...
	I0916 23:58:24.310720  146126 main.go:141] libmachine: (addons-772113) Calling .Create
	I0916 23:58:24.310911  146126 main.go:141] libmachine: (addons-772113) creating domain...
	I0916 23:58:24.310937  146126 main.go:141] libmachine: (addons-772113) creating network...
	I0916 23:58:24.312602  146126 main.go:141] libmachine: (addons-772113) DBG | found existing default network
	I0916 23:58:24.312807  146126 main.go:141] libmachine: (addons-772113) DBG | <network>
	I0916 23:58:24.312831  146126 main.go:141] libmachine: (addons-772113) DBG |   <name>default</name>
	I0916 23:58:24.312844  146126 main.go:141] libmachine: (addons-772113) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0916 23:58:24.312867  146126 main.go:141] libmachine: (addons-772113) DBG |   <forward mode='nat'>
	I0916 23:58:24.312880  146126 main.go:141] libmachine: (addons-772113) DBG |     <nat>
	I0916 23:58:24.312889  146126 main.go:141] libmachine: (addons-772113) DBG |       <port start='1024' end='65535'/>
	I0916 23:58:24.312900  146126 main.go:141] libmachine: (addons-772113) DBG |     </nat>
	I0916 23:58:24.312907  146126 main.go:141] libmachine: (addons-772113) DBG |   </forward>
	I0916 23:58:24.312917  146126 main.go:141] libmachine: (addons-772113) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0916 23:58:24.312928  146126 main.go:141] libmachine: (addons-772113) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0916 23:58:24.312949  146126 main.go:141] libmachine: (addons-772113) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0916 23:58:24.312964  146126 main.go:141] libmachine: (addons-772113) DBG |     <dhcp>
	I0916 23:58:24.312972  146126 main.go:141] libmachine: (addons-772113) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0916 23:58:24.312978  146126 main.go:141] libmachine: (addons-772113) DBG |     </dhcp>
	I0916 23:58:24.312983  146126 main.go:141] libmachine: (addons-772113) DBG |   </ip>
	I0916 23:58:24.312989  146126 main.go:141] libmachine: (addons-772113) DBG | </network>
	I0916 23:58:24.312998  146126 main.go:141] libmachine: (addons-772113) DBG | 
	I0916 23:58:24.313424  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:24.313260  146148 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ae:ed:78} reservation:<nil>}
	I0916 23:58:24.313818  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:24.313729  146148 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000013cc0}
	I0916 23:58:24.313884  146126 main.go:141] libmachine: (addons-772113) DBG | defining private network:
	I0916 23:58:24.313902  146126 main.go:141] libmachine: (addons-772113) DBG | 
	I0916 23:58:24.313908  146126 main.go:141] libmachine: (addons-772113) DBG | <network>
	I0916 23:58:24.313913  146126 main.go:141] libmachine: (addons-772113) DBG |   <name>mk-addons-772113</name>
	I0916 23:58:24.313918  146126 main.go:141] libmachine: (addons-772113) DBG |   <dns enable='no'/>
	I0916 23:58:24.313928  146126 main.go:141] libmachine: (addons-772113) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0916 23:58:24.313934  146126 main.go:141] libmachine: (addons-772113) DBG |     <dhcp>
	I0916 23:58:24.313941  146126 main.go:141] libmachine: (addons-772113) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0916 23:58:24.313961  146126 main.go:141] libmachine: (addons-772113) DBG |     </dhcp>
	I0916 23:58:24.313974  146126 main.go:141] libmachine: (addons-772113) DBG |   </ip>
	I0916 23:58:24.313982  146126 main.go:141] libmachine: (addons-772113) DBG | </network>
	I0916 23:58:24.313987  146126 main.go:141] libmachine: (addons-772113) DBG | 
	I0916 23:58:24.320384  146126 main.go:141] libmachine: (addons-772113) DBG | creating private network mk-addons-772113 192.168.50.0/24...
	I0916 23:58:24.393388  146126 main.go:141] libmachine: (addons-772113) DBG | private network mk-addons-772113 192.168.50.0/24 created
	I0916 23:58:24.393788  146126 main.go:141] libmachine: (addons-772113) DBG | <network>
	I0916 23:58:24.393814  146126 main.go:141] libmachine: (addons-772113) setting up store path in /home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113 ...
	I0916 23:58:24.393824  146126 main.go:141] libmachine: (addons-772113) DBG |   <name>mk-addons-772113</name>
	I0916 23:58:24.393835  146126 main.go:141] libmachine: (addons-772113) DBG |   <uuid>8d94f231-7360-4259-ad8b-0842db2400c0</uuid>
	I0916 23:58:24.393843  146126 main.go:141] libmachine: (addons-772113) DBG |   <bridge name='virbr2' stp='on' delay='0'/>
	I0916 23:58:24.393870  146126 main.go:141] libmachine: (addons-772113) building disk image from file:///home/jenkins/minikube-integration/21550-141589/.minikube/cache/iso/amd64/minikube-v1.37.0-amd64.iso
	I0916 23:58:24.393908  146126 main.go:141] libmachine: (addons-772113) DBG |   <mac address='52:54:00:c2:22:43'/>
	I0916 23:58:24.393938  146126 main.go:141] libmachine: (addons-772113) DBG |   <dns enable='no'/>
	I0916 23:58:24.393954  146126 main.go:141] libmachine: (addons-772113) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0916 23:58:24.393991  146126 main.go:141] libmachine: (addons-772113) DBG |     <dhcp>
	I0916 23:58:24.394007  146126 main.go:141] libmachine: (addons-772113) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0916 23:58:24.394015  146126 main.go:141] libmachine: (addons-772113) DBG |     </dhcp>
	I0916 23:58:24.394024  146126 main.go:141] libmachine: (addons-772113) DBG |   </ip>
	I0916 23:58:24.394032  146126 main.go:141] libmachine: (addons-772113) DBG | </network>
	I0916 23:58:24.394049  146126 main.go:141] libmachine: (addons-772113) DBG | 
	I0916 23:58:24.394065  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:24.393879  146148 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21550-141589/.minikube
	I0916 23:58:24.394194  146126 main.go:141] libmachine: (addons-772113) Downloading /home/jenkins/minikube-integration/21550-141589/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21550-141589/.minikube/cache/iso/amd64/minikube-v1.37.0-amd64.iso...
	I0916 23:58:24.641978  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:24.641748  146148 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa...
	I0916 23:58:24.854422  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:24.854265  146148 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/addons-772113.rawdisk...
	I0916 23:58:24.854484  146126 main.go:141] libmachine: (addons-772113) DBG | Writing magic tar header
	I0916 23:58:24.854498  146126 main.go:141] libmachine: (addons-772113) DBG | Writing SSH key tar header
	I0916 23:58:24.854506  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:24.854428  146148 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113 ...
	I0916 23:58:24.854559  146126 main.go:141] libmachine: (addons-772113) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113
	I0916 23:58:24.854596  146126 main.go:141] libmachine: (addons-772113) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21550-141589/.minikube/machines
	I0916 23:58:24.854619  146126 main.go:141] libmachine: (addons-772113) setting executable bit set on /home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113 (perms=drwx------)
	I0916 23:58:24.854632  146126 main.go:141] libmachine: (addons-772113) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21550-141589/.minikube
	I0916 23:58:24.854646  146126 main.go:141] libmachine: (addons-772113) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21550-141589
	I0916 23:58:24.854661  146126 main.go:141] libmachine: (addons-772113) setting executable bit set on /home/jenkins/minikube-integration/21550-141589/.minikube/machines (perms=drwxr-xr-x)
	I0916 23:58:24.854671  146126 main.go:141] libmachine: (addons-772113) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0916 23:58:24.854690  146126 main.go:141] libmachine: (addons-772113) DBG | checking permissions on dir: /home/jenkins
	I0916 23:58:24.854696  146126 main.go:141] libmachine: (addons-772113) DBG | checking permissions on dir: /home
	I0916 23:58:24.854721  146126 main.go:141] libmachine: (addons-772113) setting executable bit set on /home/jenkins/minikube-integration/21550-141589/.minikube (perms=drwxr-xr-x)
	I0916 23:58:24.854739  146126 main.go:141] libmachine: (addons-772113) setting executable bit set on /home/jenkins/minikube-integration/21550-141589 (perms=drwxrwxr-x)
	I0916 23:58:24.854746  146126 main.go:141] libmachine: (addons-772113) DBG | skipping /home - not owner
	I0916 23:58:24.854760  146126 main.go:141] libmachine: (addons-772113) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 23:58:24.854768  146126 main.go:141] libmachine: (addons-772113) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 23:58:24.854776  146126 main.go:141] libmachine: (addons-772113) defining domain...
	I0916 23:58:24.856442  146126 main.go:141] libmachine: (addons-772113) defining domain using XML: 
	I0916 23:58:24.856457  146126 main.go:141] libmachine: (addons-772113) <domain type='kvm'>
	I0916 23:58:24.856466  146126 main.go:141] libmachine: (addons-772113)   <name>addons-772113</name>
	I0916 23:58:24.856478  146126 main.go:141] libmachine: (addons-772113)   <memory unit='MiB'>4096</memory>
	I0916 23:58:24.856485  146126 main.go:141] libmachine: (addons-772113)   <vcpu>2</vcpu>
	I0916 23:58:24.856490  146126 main.go:141] libmachine: (addons-772113)   <features>
	I0916 23:58:24.856497  146126 main.go:141] libmachine: (addons-772113)     <acpi/>
	I0916 23:58:24.856503  146126 main.go:141] libmachine: (addons-772113)     <apic/>
	I0916 23:58:24.856519  146126 main.go:141] libmachine: (addons-772113)     <pae/>
	I0916 23:58:24.856530  146126 main.go:141] libmachine: (addons-772113)   </features>
	I0916 23:58:24.856540  146126 main.go:141] libmachine: (addons-772113)   <cpu mode='host-passthrough'>
	I0916 23:58:24.856547  146126 main.go:141] libmachine: (addons-772113)   </cpu>
	I0916 23:58:24.856556  146126 main.go:141] libmachine: (addons-772113)   <os>
	I0916 23:58:24.856566  146126 main.go:141] libmachine: (addons-772113)     <type>hvm</type>
	I0916 23:58:24.856588  146126 main.go:141] libmachine: (addons-772113)     <boot dev='cdrom'/>
	I0916 23:58:24.856604  146126 main.go:141] libmachine: (addons-772113)     <boot dev='hd'/>
	I0916 23:58:24.856617  146126 main.go:141] libmachine: (addons-772113)     <bootmenu enable='no'/>
	I0916 23:58:24.856624  146126 main.go:141] libmachine: (addons-772113)   </os>
	I0916 23:58:24.856633  146126 main.go:141] libmachine: (addons-772113)   <devices>
	I0916 23:58:24.856641  146126 main.go:141] libmachine: (addons-772113)     <disk type='file' device='cdrom'>
	I0916 23:58:24.856655  146126 main.go:141] libmachine: (addons-772113)       <source file='/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/boot2docker.iso'/>
	I0916 23:58:24.856666  146126 main.go:141] libmachine: (addons-772113)       <target dev='hdc' bus='scsi'/>
	I0916 23:58:24.856683  146126 main.go:141] libmachine: (addons-772113)       <readonly/>
	I0916 23:58:24.856697  146126 main.go:141] libmachine: (addons-772113)     </disk>
	I0916 23:58:24.856708  146126 main.go:141] libmachine: (addons-772113)     <disk type='file' device='disk'>
	I0916 23:58:24.856720  146126 main.go:141] libmachine: (addons-772113)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 23:58:24.856734  146126 main.go:141] libmachine: (addons-772113)       <source file='/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/addons-772113.rawdisk'/>
	I0916 23:58:24.856745  146126 main.go:141] libmachine: (addons-772113)       <target dev='hda' bus='virtio'/>
	I0916 23:58:24.856753  146126 main.go:141] libmachine: (addons-772113)     </disk>
	I0916 23:58:24.856761  146126 main.go:141] libmachine: (addons-772113)     <interface type='network'>
	I0916 23:58:24.856773  146126 main.go:141] libmachine: (addons-772113)       <source network='mk-addons-772113'/>
	I0916 23:58:24.856787  146126 main.go:141] libmachine: (addons-772113)       <model type='virtio'/>
	I0916 23:58:24.856799  146126 main.go:141] libmachine: (addons-772113)     </interface>
	I0916 23:58:24.856809  146126 main.go:141] libmachine: (addons-772113)     <interface type='network'>
	I0916 23:58:24.856819  146126 main.go:141] libmachine: (addons-772113)       <source network='default'/>
	I0916 23:58:24.856829  146126 main.go:141] libmachine: (addons-772113)       <model type='virtio'/>
	I0916 23:58:24.856838  146126 main.go:141] libmachine: (addons-772113)     </interface>
	I0916 23:58:24.856847  146126 main.go:141] libmachine: (addons-772113)     <serial type='pty'>
	I0916 23:58:24.856894  146126 main.go:141] libmachine: (addons-772113)       <target port='0'/>
	I0916 23:58:24.856918  146126 main.go:141] libmachine: (addons-772113)     </serial>
	I0916 23:58:24.856927  146126 main.go:141] libmachine: (addons-772113)     <console type='pty'>
	I0916 23:58:24.856936  146126 main.go:141] libmachine: (addons-772113)       <target type='serial' port='0'/>
	I0916 23:58:24.856944  146126 main.go:141] libmachine: (addons-772113)     </console>
	I0916 23:58:24.856970  146126 main.go:141] libmachine: (addons-772113)     <rng model='virtio'>
	I0916 23:58:24.856987  146126 main.go:141] libmachine: (addons-772113)       <backend model='random'>/dev/random</backend>
	I0916 23:58:24.857000  146126 main.go:141] libmachine: (addons-772113)     </rng>
	I0916 23:58:24.857028  146126 main.go:141] libmachine: (addons-772113)   </devices>
	I0916 23:58:24.857040  146126 main.go:141] libmachine: (addons-772113) </domain>
	I0916 23:58:24.857056  146126 main.go:141] libmachine: (addons-772113) 
	I0916 23:58:24.862206  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:34:95:9d in network default
	I0916 23:58:24.862883  146126 main.go:141] libmachine: (addons-772113) starting domain...
	I0916 23:58:24.862924  146126 main.go:141] libmachine: (addons-772113) ensuring networks are active...
	I0916 23:58:24.862937  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:24.863796  146126 main.go:141] libmachine: (addons-772113) Ensuring network default is active
	I0916 23:58:24.864393  146126 main.go:141] libmachine: (addons-772113) Ensuring network mk-addons-772113 is active
	I0916 23:58:24.865140  146126 main.go:141] libmachine: (addons-772113) getting domain XML...
	I0916 23:58:24.866424  146126 main.go:141] libmachine: (addons-772113) DBG | starting domain XML:
	I0916 23:58:24.866453  146126 main.go:141] libmachine: (addons-772113) DBG | <domain type='kvm'>
	I0916 23:58:24.866464  146126 main.go:141] libmachine: (addons-772113) DBG |   <name>addons-772113</name>
	I0916 23:58:24.866471  146126 main.go:141] libmachine: (addons-772113) DBG |   <uuid>ef75d235-f922-4e16-bc5c-4ff1216a6de0</uuid>
	I0916 23:58:24.866757  146126 main.go:141] libmachine: (addons-772113) DBG |   <memory unit='KiB'>4194304</memory>
	I0916 23:58:24.866900  146126 main.go:141] libmachine: (addons-772113) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I0916 23:58:24.867942  146126 main.go:141] libmachine: (addons-772113) DBG |   <vcpu placement='static'>2</vcpu>
	I0916 23:58:24.868224  146126 main.go:141] libmachine: (addons-772113) DBG |   <os>
	I0916 23:58:24.868252  146126 main.go:141] libmachine: (addons-772113) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0916 23:58:24.868259  146126 main.go:141] libmachine: (addons-772113) DBG |     <boot dev='cdrom'/>
	I0916 23:58:24.868265  146126 main.go:141] libmachine: (addons-772113) DBG |     <boot dev='hd'/>
	I0916 23:58:24.868270  146126 main.go:141] libmachine: (addons-772113) DBG |     <bootmenu enable='no'/>
	I0916 23:58:24.868277  146126 main.go:141] libmachine: (addons-772113) DBG |   </os>
	I0916 23:58:24.868281  146126 main.go:141] libmachine: (addons-772113) DBG |   <features>
	I0916 23:58:24.868287  146126 main.go:141] libmachine: (addons-772113) DBG |     <acpi/>
	I0916 23:58:24.868290  146126 main.go:141] libmachine: (addons-772113) DBG |     <apic/>
	I0916 23:58:24.868295  146126 main.go:141] libmachine: (addons-772113) DBG |     <pae/>
	I0916 23:58:24.868299  146126 main.go:141] libmachine: (addons-772113) DBG |   </features>
	I0916 23:58:24.868307  146126 main.go:141] libmachine: (addons-772113) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0916 23:58:24.868316  146126 main.go:141] libmachine: (addons-772113) DBG |   <clock offset='utc'/>
	I0916 23:58:24.868326  146126 main.go:141] libmachine: (addons-772113) DBG |   <on_poweroff>destroy</on_poweroff>
	I0916 23:58:24.868337  146126 main.go:141] libmachine: (addons-772113) DBG |   <on_reboot>restart</on_reboot>
	I0916 23:58:24.868344  146126 main.go:141] libmachine: (addons-772113) DBG |   <on_crash>destroy</on_crash>
	I0916 23:58:24.868351  146126 main.go:141] libmachine: (addons-772113) DBG |   <devices>
	I0916 23:58:24.868358  146126 main.go:141] libmachine: (addons-772113) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0916 23:58:24.868365  146126 main.go:141] libmachine: (addons-772113) DBG |     <disk type='file' device='cdrom'>
	I0916 23:58:24.868370  146126 main.go:141] libmachine: (addons-772113) DBG |       <driver name='qemu' type='raw'/>
	I0916 23:58:24.868377  146126 main.go:141] libmachine: (addons-772113) DBG |       <source file='/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/boot2docker.iso'/>
	I0916 23:58:24.868383  146126 main.go:141] libmachine: (addons-772113) DBG |       <target dev='hdc' bus='scsi'/>
	I0916 23:58:24.868398  146126 main.go:141] libmachine: (addons-772113) DBG |       <readonly/>
	I0916 23:58:24.868488  146126 main.go:141] libmachine: (addons-772113) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0916 23:58:24.868517  146126 main.go:141] libmachine: (addons-772113) DBG |     </disk>
	I0916 23:58:24.868531  146126 main.go:141] libmachine: (addons-772113) DBG |     <disk type='file' device='disk'>
	I0916 23:58:24.868544  146126 main.go:141] libmachine: (addons-772113) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0916 23:58:24.868560  146126 main.go:141] libmachine: (addons-772113) DBG |       <source file='/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/addons-772113.rawdisk'/>
	I0916 23:58:24.868572  146126 main.go:141] libmachine: (addons-772113) DBG |       <target dev='hda' bus='virtio'/>
	I0916 23:58:24.868586  146126 main.go:141] libmachine: (addons-772113) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0916 23:58:24.868593  146126 main.go:141] libmachine: (addons-772113) DBG |     </disk>
	I0916 23:58:24.868610  146126 main.go:141] libmachine: (addons-772113) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0916 23:58:24.868625  146126 main.go:141] libmachine: (addons-772113) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0916 23:58:24.868637  146126 main.go:141] libmachine: (addons-772113) DBG |     </controller>
	I0916 23:58:24.868650  146126 main.go:141] libmachine: (addons-772113) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0916 23:58:24.868664  146126 main.go:141] libmachine: (addons-772113) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0916 23:58:24.868677  146126 main.go:141] libmachine: (addons-772113) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0916 23:58:24.868690  146126 main.go:141] libmachine: (addons-772113) DBG |     </controller>
	I0916 23:58:24.868706  146126 main.go:141] libmachine: (addons-772113) DBG |     <interface type='network'>
	I0916 23:58:24.868718  146126 main.go:141] libmachine: (addons-772113) DBG |       <mac address='52:54:00:1a:9c:db'/>
	I0916 23:58:24.868725  146126 main.go:141] libmachine: (addons-772113) DBG |       <source network='mk-addons-772113'/>
	I0916 23:58:24.868735  146126 main.go:141] libmachine: (addons-772113) DBG |       <model type='virtio'/>
	I0916 23:58:24.868744  146126 main.go:141] libmachine: (addons-772113) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0916 23:58:24.868753  146126 main.go:141] libmachine: (addons-772113) DBG |     </interface>
	I0916 23:58:24.868764  146126 main.go:141] libmachine: (addons-772113) DBG |     <interface type='network'>
	I0916 23:58:24.868776  146126 main.go:141] libmachine: (addons-772113) DBG |       <mac address='52:54:00:34:95:9d'/>
	I0916 23:58:24.868791  146126 main.go:141] libmachine: (addons-772113) DBG |       <source network='default'/>
	I0916 23:58:24.868845  146126 main.go:141] libmachine: (addons-772113) DBG |       <model type='virtio'/>
	I0916 23:58:24.868891  146126 main.go:141] libmachine: (addons-772113) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0916 23:58:24.868913  146126 main.go:141] libmachine: (addons-772113) DBG |     </interface>
	I0916 23:58:24.868932  146126 main.go:141] libmachine: (addons-772113) DBG |     <serial type='pty'>
	I0916 23:58:24.868956  146126 main.go:141] libmachine: (addons-772113) DBG |       <target type='isa-serial' port='0'>
	I0916 23:58:24.868978  146126 main.go:141] libmachine: (addons-772113) DBG |         <model name='isa-serial'/>
	I0916 23:58:24.868995  146126 main.go:141] libmachine: (addons-772113) DBG |       </target>
	I0916 23:58:24.869006  146126 main.go:141] libmachine: (addons-772113) DBG |     </serial>
	I0916 23:58:24.869015  146126 main.go:141] libmachine: (addons-772113) DBG |     <console type='pty'>
	I0916 23:58:24.869025  146126 main.go:141] libmachine: (addons-772113) DBG |       <target type='serial' port='0'/>
	I0916 23:58:24.869074  146126 main.go:141] libmachine: (addons-772113) DBG |     </console>
	I0916 23:58:24.869096  146126 main.go:141] libmachine: (addons-772113) DBG |     <input type='mouse' bus='ps2'/>
	I0916 23:58:24.869189  146126 main.go:141] libmachine: (addons-772113) DBG |     <input type='keyboard' bus='ps2'/>
	I0916 23:58:24.869227  146126 main.go:141] libmachine: (addons-772113) DBG |     <audio id='1' type='none'/>
	I0916 23:58:24.869240  146126 main.go:141] libmachine: (addons-772113) DBG |     <memballoon model='virtio'>
	I0916 23:58:24.869258  146126 main.go:141] libmachine: (addons-772113) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0916 23:58:24.869271  146126 main.go:141] libmachine: (addons-772113) DBG |     </memballoon>
	I0916 23:58:24.869281  146126 main.go:141] libmachine: (addons-772113) DBG |     <rng model='virtio'>
	I0916 23:58:24.869288  146126 main.go:141] libmachine: (addons-772113) DBG |       <backend model='random'>/dev/random</backend>
	I0916 23:58:24.869300  146126 main.go:141] libmachine: (addons-772113) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0916 23:58:24.869312  146126 main.go:141] libmachine: (addons-772113) DBG |     </rng>
	I0916 23:58:24.869323  146126 main.go:141] libmachine: (addons-772113) DBG |   </devices>
	I0916 23:58:24.869333  146126 main.go:141] libmachine: (addons-772113) DBG | </domain>
	I0916 23:58:24.869342  146126 main.go:141] libmachine: (addons-772113) DBG | 
	I0916 23:58:26.156304  146126 main.go:141] libmachine: (addons-772113) waiting for domain to start...
	I0916 23:58:26.157930  146126 main.go:141] libmachine: (addons-772113) domain is now running
	I0916 23:58:26.157959  146126 main.go:141] libmachine: (addons-772113) waiting for IP...
	I0916 23:58:26.158781  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:26.159612  146126 main.go:141] libmachine: (addons-772113) DBG | no network interface addresses found for domain addons-772113 (source=lease)
	I0916 23:58:26.159649  146126 main.go:141] libmachine: (addons-772113) DBG | trying to list again with source=arp
	I0916 23:58:26.159877  146126 main.go:141] libmachine: (addons-772113) DBG | unable to find current IP address of domain addons-772113 in network mk-addons-772113 (interfaces detected: [])
	I0916 23:58:26.159976  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:26.159908  146148 retry.go:31] will retry after 296.678939ms: waiting for domain to come up
	I0916 23:58:26.458692  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:26.459397  146126 main.go:141] libmachine: (addons-772113) DBG | no network interface addresses found for domain addons-772113 (source=lease)
	I0916 23:58:26.459422  146126 main.go:141] libmachine: (addons-772113) DBG | trying to list again with source=arp
	I0916 23:58:26.459730  146126 main.go:141] libmachine: (addons-772113) DBG | unable to find current IP address of domain addons-772113 in network mk-addons-772113 (interfaces detected: [])
	I0916 23:58:26.459758  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:26.459685  146148 retry.go:31] will retry after 314.938939ms: waiting for domain to come up
	I0916 23:58:26.776389  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:26.777055  146126 main.go:141] libmachine: (addons-772113) DBG | no network interface addresses found for domain addons-772113 (source=lease)
	I0916 23:58:26.777086  146126 main.go:141] libmachine: (addons-772113) DBG | trying to list again with source=arp
	I0916 23:58:26.777359  146126 main.go:141] libmachine: (addons-772113) DBG | unable to find current IP address of domain addons-772113 in network mk-addons-772113 (interfaces detected: [])
	I0916 23:58:26.777393  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:26.777336  146148 retry.go:31] will retry after 391.011172ms: waiting for domain to come up
	I0916 23:58:27.170061  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:27.170682  146126 main.go:141] libmachine: (addons-772113) DBG | no network interface addresses found for domain addons-772113 (source=lease)
	I0916 23:58:27.170712  146126 main.go:141] libmachine: (addons-772113) DBG | trying to list again with source=arp
	I0916 23:58:27.171060  146126 main.go:141] libmachine: (addons-772113) DBG | unable to find current IP address of domain addons-772113 in network mk-addons-772113 (interfaces detected: [])
	I0916 23:58:27.171083  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:27.171045  146148 retry.go:31] will retry after 585.886558ms: waiting for domain to come up
	I0916 23:58:27.759071  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:27.759730  146126 main.go:141] libmachine: (addons-772113) DBG | no network interface addresses found for domain addons-772113 (source=lease)
	I0916 23:58:27.759753  146126 main.go:141] libmachine: (addons-772113) DBG | trying to list again with source=arp
	I0916 23:58:27.760069  146126 main.go:141] libmachine: (addons-772113) DBG | unable to find current IP address of domain addons-772113 in network mk-addons-772113 (interfaces detected: [])
	I0916 23:58:27.760094  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:27.760036  146148 retry.go:31] will retry after 617.563877ms: waiting for domain to come up
	I0916 23:58:28.378936  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:28.379635  146126 main.go:141] libmachine: (addons-772113) DBG | no network interface addresses found for domain addons-772113 (source=lease)
	I0916 23:58:28.379657  146126 main.go:141] libmachine: (addons-772113) DBG | trying to list again with source=arp
	I0916 23:58:28.380001  146126 main.go:141] libmachine: (addons-772113) DBG | unable to find current IP address of domain addons-772113 in network mk-addons-772113 (interfaces detected: [])
	I0916 23:58:28.380035  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:28.379947  146148 retry.go:31] will retry after 695.387159ms: waiting for domain to come up
	I0916 23:58:29.076978  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:29.077688  146126 main.go:141] libmachine: (addons-772113) DBG | no network interface addresses found for domain addons-772113 (source=lease)
	I0916 23:58:29.077712  146126 main.go:141] libmachine: (addons-772113) DBG | trying to list again with source=arp
	I0916 23:58:29.078073  146126 main.go:141] libmachine: (addons-772113) DBG | unable to find current IP address of domain addons-772113 in network mk-addons-772113 (interfaces detected: [])
	I0916 23:58:29.078107  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:29.078027  146148 retry.go:31] will retry after 963.255032ms: waiting for domain to come up
	I0916 23:58:30.043170  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:30.043869  146126 main.go:141] libmachine: (addons-772113) DBG | no network interface addresses found for domain addons-772113 (source=lease)
	I0916 23:58:30.043896  146126 main.go:141] libmachine: (addons-772113) DBG | trying to list again with source=arp
	I0916 23:58:30.044185  146126 main.go:141] libmachine: (addons-772113) DBG | unable to find current IP address of domain addons-772113 in network mk-addons-772113 (interfaces detected: [])
	I0916 23:58:30.044212  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:30.044147  146148 retry.go:31] will retry after 1.457531244s: waiting for domain to come up
	I0916 23:58:31.504273  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:31.505073  146126 main.go:141] libmachine: (addons-772113) DBG | no network interface addresses found for domain addons-772113 (source=lease)
	I0916 23:58:31.505093  146126 main.go:141] libmachine: (addons-772113) DBG | trying to list again with source=arp
	I0916 23:58:31.505460  146126 main.go:141] libmachine: (addons-772113) DBG | unable to find current IP address of domain addons-772113 in network mk-addons-772113 (interfaces detected: [])
	I0916 23:58:31.505503  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:31.505448  146148 retry.go:31] will retry after 1.20306368s: waiting for domain to come up
	I0916 23:58:32.709917  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:32.710454  146126 main.go:141] libmachine: (addons-772113) DBG | no network interface addresses found for domain addons-772113 (source=lease)
	I0916 23:58:32.710478  146126 main.go:141] libmachine: (addons-772113) DBG | trying to list again with source=arp
	I0916 23:58:32.710704  146126 main.go:141] libmachine: (addons-772113) DBG | unable to find current IP address of domain addons-772113 in network mk-addons-772113 (interfaces detected: [])
	I0916 23:58:32.710747  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:32.710709  146148 retry.go:31] will retry after 2.111221156s: waiting for domain to come up
	I0916 23:58:34.824263  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:34.824947  146126 main.go:141] libmachine: (addons-772113) DBG | no network interface addresses found for domain addons-772113 (source=lease)
	I0916 23:58:34.824983  146126 main.go:141] libmachine: (addons-772113) DBG | trying to list again with source=arp
	I0916 23:58:34.825248  146126 main.go:141] libmachine: (addons-772113) DBG | unable to find current IP address of domain addons-772113 in network mk-addons-772113 (interfaces detected: [])
	I0916 23:58:34.825309  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:34.825237  146148 retry.go:31] will retry after 1.797101116s: waiting for domain to come up
	I0916 23:58:36.625467  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:36.626127  146126 main.go:141] libmachine: (addons-772113) DBG | no network interface addresses found for domain addons-772113 (source=lease)
	I0916 23:58:36.626151  146126 main.go:141] libmachine: (addons-772113) DBG | trying to list again with source=arp
	I0916 23:58:36.626387  146126 main.go:141] libmachine: (addons-772113) DBG | unable to find current IP address of domain addons-772113 in network mk-addons-772113 (interfaces detected: [])
	I0916 23:58:36.626414  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:36.626365  146148 retry.go:31] will retry after 3.033302775s: waiting for domain to come up
	I0916 23:58:39.661465  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:39.662389  146126 main.go:141] libmachine: (addons-772113) DBG | no network interface addresses found for domain addons-772113 (source=lease)
	I0916 23:58:39.662425  146126 main.go:141] libmachine: (addons-772113) DBG | trying to list again with source=arp
	I0916 23:58:39.662718  146126 main.go:141] libmachine: (addons-772113) DBG | unable to find current IP address of domain addons-772113 in network mk-addons-772113 (interfaces detected: [])
	I0916 23:58:39.662775  146126 main.go:141] libmachine: (addons-772113) DBG | I0916 23:58:39.662717  146148 retry.go:31] will retry after 4.377649217s: waiting for domain to come up
	I0916 23:58:44.042320  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:44.043020  146126 main.go:141] libmachine: (addons-772113) found domain IP: 192.168.50.205
	I0916 23:58:44.043038  146126 main.go:141] libmachine: (addons-772113) reserving static IP address...
	I0916 23:58:44.043047  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has current primary IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:44.043507  146126 main.go:141] libmachine: (addons-772113) DBG | unable to find host DHCP lease matching {name: "addons-772113", mac: "52:54:00:1a:9c:db", ip: "192.168.50.205"} in network mk-addons-772113
	I0916 23:58:44.240197  146126 main.go:141] libmachine: (addons-772113) DBG | Getting to WaitForSSH function...
	I0916 23:58:44.240239  146126 main.go:141] libmachine: (addons-772113) reserved static IP address 192.168.50.205 for domain addons-772113
	I0916 23:58:44.240258  146126 main.go:141] libmachine: (addons-772113) waiting for SSH...
	I0916 23:58:44.243439  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:44.244110  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:44.244153  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:44.244385  146126 main.go:141] libmachine: (addons-772113) DBG | Using SSH client type: external
	I0916 23:58:44.244528  146126 main.go:141] libmachine: (addons-772113) DBG | Using SSH private key: /home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa (-rw-------)
	I0916 23:58:44.244565  146126 main.go:141] libmachine: (addons-772113) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 23:58:44.244585  146126 main.go:141] libmachine: (addons-772113) DBG | About to run SSH command:
	I0916 23:58:44.244604  146126 main.go:141] libmachine: (addons-772113) DBG | exit 0
	I0916 23:58:44.378015  146126 main.go:141] libmachine: (addons-772113) DBG | SSH cmd err, output: <nil>: 
	I0916 23:58:44.378366  146126 main.go:141] libmachine: (addons-772113) domain creation complete
	I0916 23:58:44.378681  146126 main.go:141] libmachine: (addons-772113) Calling .GetConfigRaw
	I0916 23:58:44.379366  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:58:44.379586  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:58:44.379801  146126 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 23:58:44.379820  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:58:44.381441  146126 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 23:58:44.381457  146126 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 23:58:44.381464  146126 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 23:58:44.381469  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:58:44.384258  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:44.384658  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:44.384683  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:44.384892  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:58:44.385071  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:58:44.385255  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:58:44.385406  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:58:44.385566  146126 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:44.385868  146126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.205 22 <nil> <nil>}
	I0916 23:58:44.385887  146126 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 23:58:44.495618  146126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:58:44.495656  146126 main.go:141] libmachine: Detecting the provisioner...
	I0916 23:58:44.495668  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:58:44.499168  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:44.499641  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:44.499670  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:44.499845  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:58:44.500089  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:58:44.500260  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:58:44.500447  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:58:44.500613  146126 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:44.500907  146126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.205 22 <nil> <nil>}
	I0916 23:58:44.500928  146126 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 23:58:44.611609  146126 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0916 23:58:44.611721  146126 main.go:141] libmachine: found compatible host: buildroot
	I0916 23:58:44.611737  146126 main.go:141] libmachine: Provisioning with buildroot...
	I0916 23:58:44.611750  146126 main.go:141] libmachine: (addons-772113) Calling .GetMachineName
	I0916 23:58:44.612090  146126 buildroot.go:166] provisioning hostname "addons-772113"
	I0916 23:58:44.612130  146126 main.go:141] libmachine: (addons-772113) Calling .GetMachineName
	I0916 23:58:44.612373  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:58:44.615771  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:44.616265  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:44.616290  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:44.616496  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:58:44.616703  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:58:44.616886  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:58:44.617061  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:58:44.617238  146126 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:44.617492  146126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.205 22 <nil> <nil>}
	I0916 23:58:44.617515  146126 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-772113 && echo "addons-772113" | sudo tee /etc/hostname
	I0916 23:58:44.741401  146126 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-772113
	
	I0916 23:58:44.741432  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:58:44.744558  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:44.745125  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:44.745159  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:44.745431  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:58:44.745660  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:58:44.745844  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:58:44.746040  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:58:44.746215  146126 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:44.746427  146126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.205 22 <nil> <nil>}
	I0916 23:58:44.746450  146126 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-772113' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-772113/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-772113' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 23:58:44.865428  146126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 23:58:44.865480  146126 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21550-141589/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-141589/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-141589/.minikube}
	I0916 23:58:44.865529  146126 buildroot.go:174] setting up certificates
	I0916 23:58:44.865543  146126 provision.go:84] configureAuth start
	I0916 23:58:44.865558  146126 main.go:141] libmachine: (addons-772113) Calling .GetMachineName
	I0916 23:58:44.865935  146126 main.go:141] libmachine: (addons-772113) Calling .GetIP
	I0916 23:58:44.870497  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:44.871048  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:44.871082  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:44.871348  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:58:44.874072  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:44.874532  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:44.874553  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:44.874793  146126 provision.go:143] copyHostCerts
	I0916 23:58:44.874889  146126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-141589/.minikube/ca.pem (1078 bytes)
	I0916 23:58:44.875049  146126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-141589/.minikube/cert.pem (1123 bytes)
	I0916 23:58:44.875229  146126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-141589/.minikube/key.pem (1675 bytes)
	I0916 23:58:44.875330  146126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-141589/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca-key.pem org=jenkins.addons-772113 san=[127.0.0.1 192.168.50.205 addons-772113 localhost minikube]
	I0916 23:58:45.264926  146126 provision.go:177] copyRemoteCerts
	I0916 23:58:45.264985  146126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 23:58:45.265011  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:58:45.268658  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:45.269062  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:45.269104  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:45.269329  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:58:45.269546  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:58:45.269761  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:58:45.269932  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:58:45.356481  146126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 23:58:45.387973  146126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 23:58:45.422501  146126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 23:58:45.481021  146126 provision.go:87] duration metric: took 615.454449ms to configureAuth
	I0916 23:58:45.481058  146126 buildroot.go:189] setting minikube options for container-runtime
	I0916 23:58:45.481288  146126 config.go:182] Loaded profile config "addons-772113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:58:45.481390  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:58:45.484745  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:45.485271  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:45.485298  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:45.485487  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:58:45.485768  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:58:45.485973  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:58:45.486192  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:58:45.486410  146126 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:45.486624  146126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.205 22 <nil> <nil>}
	I0916 23:58:45.486639  146126 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 23:58:46.054956  146126 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 23:58:46.054986  146126 main.go:141] libmachine: Checking connection to Docker...
	I0916 23:58:46.054998  146126 main.go:141] libmachine: (addons-772113) Calling .GetURL
	I0916 23:58:46.056634  146126 main.go:141] libmachine: (addons-772113) DBG | using libvirt version 8000000
	I0916 23:58:46.059373  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:46.059943  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:46.059984  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:46.060237  146126 main.go:141] libmachine: Docker is up and running!
	I0916 23:58:46.060253  146126 main.go:141] libmachine: Reticulating splines...
	I0916 23:58:46.060260  146126 client.go:171] duration metric: took 22.273094393s to LocalClient.Create
	I0916 23:58:46.060287  146126 start.go:167] duration metric: took 22.273161996s to libmachine.API.Create "addons-772113"
	I0916 23:58:46.060306  146126 start.go:293] postStartSetup for "addons-772113" (driver="kvm2")
	I0916 23:58:46.060336  146126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 23:58:46.060358  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:58:46.060679  146126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 23:58:46.060707  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:58:46.063691  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:46.064093  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:46.064128  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:46.064332  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:58:46.064667  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:58:46.064926  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:58:46.065101  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:58:46.152171  146126 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 23:58:46.157071  146126 info.go:137] Remote host: Buildroot 2025.02
	I0916 23:58:46.157099  146126 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-141589/.minikube/addons for local assets ...
	I0916 23:58:46.157196  146126 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-141589/.minikube/files for local assets ...
	I0916 23:58:46.157232  146126 start.go:296] duration metric: took 96.916117ms for postStartSetup
	I0916 23:58:46.157295  146126 main.go:141] libmachine: (addons-772113) Calling .GetConfigRaw
	I0916 23:58:46.157942  146126 main.go:141] libmachine: (addons-772113) Calling .GetIP
	I0916 23:58:46.161020  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:46.161631  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:46.161660  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:46.162020  146126 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/config.json ...
	I0916 23:58:46.162221  146126 start.go:128] duration metric: took 22.39261654s to createHost
	I0916 23:58:46.162246  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:58:46.164665  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:46.165034  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:46.165065  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:46.165243  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:58:46.165529  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:58:46.165727  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:58:46.165930  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:58:46.166148  146126 main.go:141] libmachine: Using SSH client type: native
	I0916 23:58:46.166357  146126 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.205 22 <nil> <nil>}
	I0916 23:58:46.166367  146126 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 23:58:46.275789  146126 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758067126.240085444
	
	I0916 23:58:46.275827  146126 fix.go:216] guest clock: 1758067126.240085444
	I0916 23:58:46.275840  146126 fix.go:229] Guest: 2025-09-16 23:58:46.240085444 +0000 UTC Remote: 2025-09-16 23:58:46.162233219 +0000 UTC m=+22.501033386 (delta=77.852225ms)
	I0916 23:58:46.275923  146126 fix.go:200] guest clock delta is within tolerance: 77.852225ms
	I0916 23:58:46.275935  146126 start.go:83] releasing machines lock for "addons-772113", held for 22.506410963s
	I0916 23:58:46.275979  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:58:46.276297  146126 main.go:141] libmachine: (addons-772113) Calling .GetIP
	I0916 23:58:46.279488  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:46.279896  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:46.279927  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:46.280111  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:58:46.280635  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:58:46.280828  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:58:46.280938  146126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 23:58:46.280999  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:58:46.281059  146126 ssh_runner.go:195] Run: cat /version.json
	I0916 23:58:46.281087  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:58:46.284286  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:46.284383  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:46.284733  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:46.284761  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:46.284786  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:46.284800  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:46.284971  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:58:46.285156  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:58:46.285178  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:58:46.285324  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:58:46.285402  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:58:46.285467  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:58:46.285563  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:58:46.285574  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:58:46.369812  146126 ssh_runner.go:195] Run: systemctl --version
	I0916 23:58:46.398924  146126 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 23:58:46.561360  146126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 23:58:46.568553  146126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 23:58:46.568639  146126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 23:58:46.589943  146126 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 23:58:46.589982  146126 start.go:495] detecting cgroup driver to use...
	I0916 23:58:46.590064  146126 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 23:58:46.610611  146126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 23:58:46.628499  146126 docker.go:218] disabling cri-docker service (if available) ...
	I0916 23:58:46.628561  146126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 23:58:46.644764  146126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 23:58:46.662189  146126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 23:58:46.806152  146126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 23:58:46.955667  146126 docker.go:234] disabling docker service ...
	I0916 23:58:46.955744  146126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 23:58:46.972404  146126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 23:58:46.987770  146126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 23:58:47.201383  146126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 23:58:47.350367  146126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 23:58:47.367148  146126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 23:58:47.390577  146126 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0916 23:58:47.390670  146126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:58:47.404261  146126 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 23:58:47.404339  146126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:58:47.417714  146126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:58:47.431100  146126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:58:47.444646  146126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 23:58:47.458608  146126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:58:47.472066  146126 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:58:47.493671  146126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 23:58:47.506619  146126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 23:58:47.518141  146126 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 23:58:47.518222  146126 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 23:58:47.538574  146126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 23:58:47.550367  146126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:47.697514  146126 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 23:58:47.808957  146126 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 23:58:47.809056  146126 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 23:58:47.814898  146126 start.go:563] Will wait 60s for crictl version
	I0916 23:58:47.815008  146126 ssh_runner.go:195] Run: which crictl
	I0916 23:58:47.819430  146126 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 23:58:47.863717  146126 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 23:58:47.863821  146126 ssh_runner.go:195] Run: crio --version
	I0916 23:58:47.895037  146126 ssh_runner.go:195] Run: crio --version
	I0916 23:58:47.928791  146126 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0916 23:58:47.930109  146126 main.go:141] libmachine: (addons-772113) Calling .GetIP
	I0916 23:58:47.933452  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:47.933912  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:58:47.933937  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:58:47.934188  146126 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0916 23:58:47.939713  146126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:58:47.956385  146126 kubeadm.go:875] updating cluster {Name:addons-772113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-772113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.205 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 23:58:47.956534  146126 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:58:47.956604  146126 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:58:47.993163  146126 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0916 23:58:47.993269  146126 ssh_runner.go:195] Run: which lz4
	I0916 23:58:47.997873  146126 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 23:58:48.002763  146126 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 23:58:48.002809  146126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0916 23:58:49.512948  146126 crio.go:462] duration metric: took 1.515131946s to copy over tarball
	I0916 23:58:49.513028  146126 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 23:58:51.118352  146126 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.605295623s)
	I0916 23:58:51.118380  146126 crio.go:469] duration metric: took 1.605404353s to extract the tarball
	I0916 23:58:51.118389  146126 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 23:58:51.161133  146126 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 23:58:51.209052  146126 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 23:58:51.209080  146126 cache_images.go:85] Images are preloaded, skipping loading
	I0916 23:58:51.209089  146126 kubeadm.go:926] updating node { 192.168.50.205 8443 v1.34.0 crio true true} ...
	I0916 23:58:51.209281  146126 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-772113 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-772113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 23:58:51.209354  146126 ssh_runner.go:195] Run: crio config
	I0916 23:58:51.259599  146126 cni.go:84] Creating CNI manager for ""
	I0916 23:58:51.259629  146126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 23:58:51.259643  146126 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 23:58:51.259665  146126 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.205 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-772113 NodeName:addons-772113 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 23:58:51.259793  146126 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-772113"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.205"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.205"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 23:58:51.259890  146126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0916 23:58:51.272660  146126 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 23:58:51.272738  146126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 23:58:51.285660  146126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 23:58:51.307673  146126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 23:58:51.329269  146126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I0916 23:58:51.350724  146126 ssh_runner.go:195] Run: grep 192.168.50.205	control-plane.minikube.internal$ /etc/hosts
	I0916 23:58:51.355687  146126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.205	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 23:58:51.371354  146126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:58:51.515382  146126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:58:51.548592  146126 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113 for IP: 192.168.50.205
	I0916 23:58:51.548622  146126 certs.go:194] generating shared ca certs ...
	I0916 23:58:51.548646  146126 certs.go:226] acquiring lock for ca certs: {Name:mk9185d5103eebb4e8c41dd45f840888861a3f37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:51.548825  146126 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-141589/.minikube/ca.key
	I0916 23:58:51.703335  146126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt ...
	I0916 23:58:51.703365  146126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt: {Name:mk89fe3191f235fcd02394e86bb91d550fb267b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:51.703563  146126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-141589/.minikube/ca.key ...
	I0916 23:58:51.703579  146126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/ca.key: {Name:mkf728791df73e1d8171478f71d0648d030b2ff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:51.703668  146126 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-141589/.minikube/proxy-client-ca.key
	I0916 23:58:52.322528  146126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-141589/.minikube/proxy-client-ca.crt ...
	I0916 23:58:52.322567  146126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/proxy-client-ca.crt: {Name:mk45637966c01d1a6dc79da389d8b63622da6b6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:52.322738  146126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-141589/.minikube/proxy-client-ca.key ...
	I0916 23:58:52.322749  146126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/proxy-client-ca.key: {Name:mka5820e0486275f8fb1df4c0a2f555395a92210 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:52.322821  146126 certs.go:256] generating profile certs ...
	I0916 23:58:52.322911  146126 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.key
	I0916 23:58:52.322933  146126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt with IP's: []
	I0916 23:58:52.602091  146126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt ...
	I0916 23:58:52.602135  146126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: {Name:mk0cb2059ee49c9dfaad586a6faec239285d751c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:52.602339  146126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.key ...
	I0916 23:58:52.602353  146126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.key: {Name:mk5e3c73d2304ab213218e25d7e7b041939b2c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:52.602438  146126 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/apiserver.key.25b04cb9
	I0916 23:58:52.602461  146126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/apiserver.crt.25b04cb9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.205]
	I0916 23:58:52.833073  146126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/apiserver.crt.25b04cb9 ...
	I0916 23:58:52.833106  146126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/apiserver.crt.25b04cb9: {Name:mkfe2f4336402230fa6a373b282a465a71665135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:52.833286  146126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/apiserver.key.25b04cb9 ...
	I0916 23:58:52.833314  146126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/apiserver.key.25b04cb9: {Name:mk06576ef6760c6eafa8905c8660cf28f79981ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:52.833390  146126 certs.go:381] copying /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/apiserver.crt.25b04cb9 -> /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/apiserver.crt
	I0916 23:58:52.833484  146126 certs.go:385] copying /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/apiserver.key.25b04cb9 -> /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/apiserver.key
	I0916 23:58:52.833533  146126 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/proxy-client.key
	I0916 23:58:52.833552  146126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/proxy-client.crt with IP's: []
	I0916 23:58:53.135881  146126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/proxy-client.crt ...
	I0916 23:58:53.135922  146126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/proxy-client.crt: {Name:mk9b619e5a7b7a7a6d1d2e4114bb9d85b8c88d06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:53.136093  146126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/proxy-client.key ...
	I0916 23:58:53.136123  146126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/proxy-client.key: {Name:mk57bcf2514b19fc9a4ac08c5f8dca83f712ba02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:53.136347  146126 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca-key.pem (1675 bytes)
	I0916 23:58:53.136388  146126 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem (1078 bytes)
	I0916 23:58:53.136415  146126 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/cert.pem (1123 bytes)
	I0916 23:58:53.136436  146126 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/key.pem (1675 bytes)
	I0916 23:58:53.137098  146126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 23:58:53.182312  146126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 23:58:53.227022  146126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 23:58:53.258308  146126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 23:58:53.289431  146126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 23:58:53.323028  146126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 23:58:53.357818  146126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 23:58:53.390683  146126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 23:58:53.422341  146126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 23:58:53.453941  146126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 23:58:53.475586  146126 ssh_runner.go:195] Run: openssl version
	I0916 23:58:53.482603  146126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 23:58:53.496472  146126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:53.502080  146126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:58 /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:53.502158  146126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 23:58:53.510068  146126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 23:58:53.524055  146126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 23:58:53.529107  146126 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 23:58:53.529177  146126 kubeadm.go:392] StartCluster: {Name:addons-772113 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-772113 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.205 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:58:53.529308  146126 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 23:58:53.529360  146126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 23:58:53.571588  146126 cri.go:89] found id: ""
	I0916 23:58:53.571670  146126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 23:58:53.584594  146126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 23:58:53.596838  146126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 23:58:53.609057  146126 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 23:58:53.609082  146126 kubeadm.go:157] found existing configuration files:
	
	I0916 23:58:53.609141  146126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 23:58:53.620192  146126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 23:58:53.620284  146126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 23:58:53.632503  146126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 23:58:53.643423  146126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 23:58:53.643507  146126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 23:58:53.655573  146126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 23:58:53.666354  146126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 23:58:53.666415  146126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 23:58:53.678662  146126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 23:58:53.689929  146126 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 23:58:53.689993  146126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 23:58:53.701933  146126 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 23:58:53.754357  146126 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0916 23:58:53.754459  146126 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 23:58:53.862488  146126 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 23:58:53.862622  146126 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 23:58:53.862747  146126 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 23:58:53.878599  146126 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 23:58:54.014620  146126 out.go:252]   - Generating certificates and keys ...
	I0916 23:58:54.014743  146126 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 23:58:54.014831  146126 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 23:58:54.014995  146126 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 23:58:54.179081  146126 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 23:58:54.515673  146126 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 23:58:54.739493  146126 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 23:58:54.805577  146126 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 23:58:54.805749  146126 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-772113 localhost] and IPs [192.168.50.205 127.0.0.1 ::1]
	I0916 23:58:54.905482  146126 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 23:58:54.905689  146126 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-772113 localhost] and IPs [192.168.50.205 127.0.0.1 ::1]
	I0916 23:58:55.360837  146126 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 23:58:55.486102  146126 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 23:58:55.846312  146126 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 23:58:55.846417  146126 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 23:58:56.164536  146126 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 23:58:56.466986  146126 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 23:58:56.776623  146126 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 23:58:56.873361  146126 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 23:58:57.266115  146126 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 23:58:57.266779  146126 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 23:58:57.269171  146126 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 23:58:57.271192  146126 out.go:252]   - Booting up control plane ...
	I0916 23:58:57.271320  146126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 23:58:57.271450  146126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 23:58:57.271586  146126 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 23:58:57.288957  146126 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 23:58:57.289160  146126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0916 23:58:57.297206  146126 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0916 23:58:57.297976  146126 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 23:58:57.298070  146126 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 23:58:57.477288  146126 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 23:58:57.477448  146126 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 23:58:58.977828  146126 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501902109s
	I0916 23:58:58.982035  146126 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0916 23:58:58.982169  146126 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.50.205:8443/livez
	I0916 23:58:58.982321  146126 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0916 23:58:58.982474  146126 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0916 23:59:02.229151  146126 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.248640561s
	I0916 23:59:02.991032  146126 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.011364189s
	I0916 23:59:04.980271  146126 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.001752822s
	I0916 23:59:05.001890  146126 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 23:59:05.023984  146126 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 23:59:05.050807  146126 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 23:59:05.051010  146126 kubeadm.go:310] [mark-control-plane] Marking the node addons-772113 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 23:59:05.068296  146126 kubeadm.go:310] [bootstrap-token] Using token: ve2izu.5phpq2r20zzmegom
	I0916 23:59:05.069978  146126 out.go:252]   - Configuring RBAC rules ...
	I0916 23:59:05.070152  146126 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 23:59:05.079374  146126 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 23:59:05.088341  146126 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 23:59:05.092332  146126 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 23:59:05.099298  146126 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 23:59:05.103663  146126 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 23:59:05.388554  146126 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 23:59:05.841940  146126 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 23:59:06.387772  146126 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 23:59:06.390449  146126 kubeadm.go:310] 
	I0916 23:59:06.390526  146126 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 23:59:06.390536  146126 kubeadm.go:310] 
	I0916 23:59:06.390642  146126 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 23:59:06.390662  146126 kubeadm.go:310] 
	I0916 23:59:06.390685  146126 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 23:59:06.390741  146126 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 23:59:06.390792  146126 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 23:59:06.390798  146126 kubeadm.go:310] 
	I0916 23:59:06.390843  146126 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 23:59:06.390865  146126 kubeadm.go:310] 
	I0916 23:59:06.390966  146126 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 23:59:06.390999  146126 kubeadm.go:310] 
	I0916 23:59:06.391098  146126 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 23:59:06.391204  146126 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 23:59:06.391282  146126 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 23:59:06.391291  146126 kubeadm.go:310] 
	I0916 23:59:06.391377  146126 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 23:59:06.391501  146126 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 23:59:06.391526  146126 kubeadm.go:310] 
	I0916 23:59:06.391659  146126 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ve2izu.5phpq2r20zzmegom \
	I0916 23:59:06.391796  146126 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eba1e7cbf10ad46ae3add200d92658a4c628981bbffffc064505e9ec25d71153 \
	I0916 23:59:06.391834  146126 kubeadm.go:310] 	--control-plane 
	I0916 23:59:06.391867  146126 kubeadm.go:310] 
	I0916 23:59:06.391983  146126 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 23:59:06.391992  146126 kubeadm.go:310] 
	I0916 23:59:06.392063  146126 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ve2izu.5phpq2r20zzmegom \
	I0916 23:59:06.392151  146126 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eba1e7cbf10ad46ae3add200d92658a4c628981bbffffc064505e9ec25d71153 
	I0916 23:59:06.394588  146126 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
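	(Editor's note: kubeadm init has completed at this point and the admin kubeconfig has been written to /etc/kubernetes/admin.conf inside the VM. A minimal manual sanity check, illustrative only and mirroring the kubectl invocations minikube itself issues below, would be:

	# hedged sketch; paths and profile name are taken from this log
	minikube -p addons-772113 ssh -- \
	  sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes -o wide
	)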
	I0916 23:59:06.394617  146126 cni.go:84] Creating CNI manager for ""
	I0916 23:59:06.394624  146126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 23:59:06.396494  146126 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 23:59:06.398119  146126 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 23:59:06.411536  146126 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
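	(Editor's note: the bridge CNI definition is written to /etc/cni/net.d/1-k8s.conflist (496 bytes, per the line above); its contents are not reproduced in this log. To inspect what actually landed on the node one could run, illustratively:

	minikube -p addons-772113 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
	)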
	I0916 23:59:06.439532  146126 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 23:59:06.439666  146126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:59:06.439682  146126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-772113 minikube.k8s.io/updated_at=2025_09_16T23_59_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=addons-772113 minikube.k8s.io/primary=true
	I0916 23:59:06.488011  146126 ops.go:34] apiserver oom_adj: -16
	I0916 23:59:06.609458  146126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:59:07.110180  146126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:59:07.610527  146126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:59:08.110393  146126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:59:08.609574  146126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:59:09.109572  146126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:59:09.609689  146126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:59:10.109987  146126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:59:10.610435  146126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 23:59:10.698300  146126 kubeadm.go:1105] duration metric: took 4.258711301s to wait for elevateKubeSystemPrivileges
	I0916 23:59:10.698351  146126 kubeadm.go:394] duration metric: took 17.169180539s to StartCluster
	I0916 23:59:10.698380  146126 settings.go:142] acquiring lock: {Name:mkba5c2f6664f4802b257b08a521179f4376b493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:59:10.698543  146126 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-141589/kubeconfig
	I0916 23:59:10.699096  146126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/kubeconfig: {Name:mk94de3540a2264fcc25d797d3876af7c7bbc524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:59:10.699376  146126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 23:59:10.699401  146126 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.205 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 23:59:10.699462  146126 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
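	(Editor's note: the toEnable map above is the effective addon selection for this profile. The same toggles are exposed through the minikube CLI; for example, hypothetically and using addon names from the map above:

	minikube -p addons-772113 addons list
	minikube -p addons-772113 addons enable ingress
	)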
	I0916 23:59:10.699604  146126 config.go:182] Loaded profile config "addons-772113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:59:10.699633  146126 addons.go:69] Setting yakd=true in profile "addons-772113"
	I0916 23:59:10.699645  146126 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-772113"
	I0916 23:59:10.699661  146126 addons.go:69] Setting default-storageclass=true in profile "addons-772113"
	I0916 23:59:10.699637  146126 addons.go:69] Setting inspektor-gadget=true in profile "addons-772113"
	I0916 23:59:10.699669  146126 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-772113"
	I0916 23:59:10.699675  146126 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-772113"
	I0916 23:59:10.699684  146126 addons.go:69] Setting volcano=true in profile "addons-772113"
	I0916 23:59:10.699707  146126 addons.go:238] Setting addon volcano=true in "addons-772113"
	I0916 23:59:10.699716  146126 addons.go:69] Setting gcp-auth=true in profile "addons-772113"
	I0916 23:59:10.699742  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.699745  146126 addons.go:69] Setting registry-creds=true in profile "addons-772113"
	I0916 23:59:10.699785  146126 addons.go:238] Setting addon registry-creds=true in "addons-772113"
	I0916 23:59:10.699807  146126 mustload.go:65] Loading cluster: addons-772113
	I0916 23:59:10.699818  146126 addons.go:69] Setting storage-provisioner=true in profile "addons-772113"
	I0916 23:59:10.699833  146126 addons.go:238] Setting addon storage-provisioner=true in "addons-772113"
	I0916 23:59:10.699880  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.699708  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.700377  146126 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-772113"
	I0916 23:59:10.700420  146126 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-772113"
	I0916 23:59:10.700635  146126 addons.go:69] Setting volumesnapshots=true in profile "addons-772113"
	I0916 23:59:10.700654  146126 addons.go:238] Setting addon volumesnapshots=true in "addons-772113"
	I0916 23:59:10.700647  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.700687  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.700720  146126 addons.go:69] Setting registry=true in profile "addons-772113"
	I0916 23:59:10.700740  146126 addons.go:238] Setting addon registry=true in "addons-772113"
	I0916 23:59:10.700734  146126 addons.go:69] Setting ingress-dns=true in profile "addons-772113"
	I0916 23:59:10.700768  146126 addons.go:238] Setting addon ingress-dns=true in "addons-772113"
	I0916 23:59:10.700782  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.700810  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.700933  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.700970  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.701181  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.701217  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.701261  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.701290  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.701309  146126 addons.go:69] Setting ingress=true in profile "addons-772113"
	I0916 23:59:10.701322  146126 addons.go:238] Setting addon ingress=true in "addons-772113"
	I0916 23:59:10.701356  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.701384  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.701409  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.701462  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.701486  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.701845  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.701897  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.702224  146126 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-772113"
	I0916 23:59:10.702297  146126 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-772113"
	I0916 23:59:10.702332  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.702506  146126 addons.go:69] Setting cloud-spanner=true in profile "addons-772113"
	I0916 23:59:10.702560  146126 addons.go:238] Setting addon cloud-spanner=true in "addons-772113"
	I0916 23:59:10.699677  146126 addons.go:238] Setting addon inspektor-gadget=true in "addons-772113"
	I0916 23:59:10.699662  146126 addons.go:238] Setting addon yakd=true in "addons-772113"
	I0916 23:59:10.702631  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.703128  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.703205  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.703605  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.704478  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.704522  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.704524  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.704561  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.705081  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.705128  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.706024  146126 addons.go:69] Setting metrics-server=true in profile "addons-772113"
	I0916 23:59:10.706046  146126 addons.go:238] Setting addon metrics-server=true in "addons-772113"
	I0916 23:59:10.706096  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.706602  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.706633  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.706283  146126 out.go:179] * Verifying Kubernetes components...
	I0916 23:59:10.707194  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.707235  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.703754  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.707618  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.708320  146126 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-772113"
	I0916 23:59:10.708344  146126 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-772113"
	I0916 23:59:10.708373  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.703974  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.708892  146126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 23:59:10.709435  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.709506  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.700624  146126 config.go:182] Loaded profile config "addons-772113": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0916 23:59:10.719637  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.719683  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.720087  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.720366  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.721969  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.722076  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.723943  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0916 23:59:10.725149  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.725895  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.725917  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.726381  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.727186  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.727213  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.727444  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33951
	I0916 23:59:10.733389  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40031
	I0916 23:59:10.733688  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.733822  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38051
	I0916 23:59:10.737418  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.737969  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.738481  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.738493  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.738881  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.739118  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.739162  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.739383  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.740972  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.741137  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.741149  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.741225  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
	I0916 23:59:10.741823  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.741838  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.741842  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.742332  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.742897  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.742920  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.744948  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.744987  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.745690  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.746421  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.746443  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.746641  146126 addons.go:238] Setting addon default-storageclass=true in "addons-772113"
	I0916 23:59:10.746682  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.747103  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.747139  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.749835  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44537
	I0916 23:59:10.749892  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38469
	I0916 23:59:10.753065  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46041
	I0916 23:59:10.753236  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37735
	I0916 23:59:10.753720  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.753967  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.754308  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.754979  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.755003  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.755357  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.755390  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.755864  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36627
	I0916 23:59:10.756003  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.756346  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.756541  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.757039  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.757060  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.757237  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.757251  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.757787  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.758016  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.758680  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42607
	I0916 23:59:10.759027  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45405
	I0916 23:59:10.759422  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34345
	I0916 23:59:10.759603  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.759784  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.759799  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.760237  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.760414  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.760432  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.760842  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.760920  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.760979  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.761135  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.761153  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.761424  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.761444  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.761513  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.761520  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.761550  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.761783  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.761840  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.762411  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.762446  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.762557  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.762580  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.764001  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I0916 23:59:10.764545  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.765950  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.765970  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.767180  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.768067  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.773948  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.768128  146126 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0916 23:59:10.774308  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.772543  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44081
	I0916 23:59:10.775295  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39483
	I0916 23:59:10.775533  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35271
	I0916 23:59:10.775780  146126 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0916 23:59:10.775802  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0916 23:59:10.775825  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:10.776055  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.776195  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I0916 23:59:10.776722  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.776772  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.777601  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.777621  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.777691  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.778524  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.778544  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.778632  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.778748  146126 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-772113"
	I0916 23:59:10.778797  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.779177  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.779219  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.779383  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.779942  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.779879  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.780191  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.780743  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.780765  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.780993  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.781032  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36021
	I0916 23:59:10.781615  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.781676  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.781786  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.781800  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.782114  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.782315  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.782890  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.782906  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.782961  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.783234  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.783517  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.784649  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.785330  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.785369  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.785602  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.787478  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:10.788027  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.788731  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:10.789958  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:10.790348  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.790396  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.790648  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.791987  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37097
	I0916 23:59:10.792493  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.792740  146126 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 23:59:10.792844  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40433
	I0916 23:59:10.793531  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.794079  146126 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 23:59:10.794096  146126 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 23:59:10.794136  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:10.795117  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.795962  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:10.796198  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.796229  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.796289  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:10.796648  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.796676  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.797213  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.796294  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.797505  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.797989  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.798005  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.798784  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.799180  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34439
	I0916 23:59:10.799409  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.799447  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.799913  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.799930  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.800004  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.800751  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.800778  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.801207  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.801483  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.801148  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:59:10.804952  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39787
	I0916 23:59:10.805780  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.806401  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.806419  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.806895  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.807047  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.807094  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.809352  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I0916 23:59:10.809581  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.811673  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.811694  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41911
	I0916 23:59:10.811806  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:10.811826  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.812186  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:10.812409  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.812447  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:10.812623  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:10.812761  146126 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0916 23:59:10.812778  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:59:10.812983  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.813014  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.813484  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.813787  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.814933  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.815067  146126 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0916 23:59:10.815155  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35779
	I0916 23:59:10.815239  146126 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0916 23:59:10.815262  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0916 23:59:10.815283  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:10.815452  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.815467  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.815767  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36609
	I0916 23:59:10.816462  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.817202  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.817226  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.817658  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.817908  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.817963  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.817970  146126 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 23:59:10.817986  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 23:59:10.818007  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:10.819321  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.822293  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.823624  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40349
	I0916 23:59:10.823625  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.823819  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.823903  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:10.824006  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.824219  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.824415  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:10.825026  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:10.825712  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:10.825828  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.826048  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:59:10.826372  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:10.826425  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:10.826887  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:10.826921  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.826931  146126 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0916 23:59:10.827421  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.827884  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46227
	I0916 23:59:10.827897  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:10.827917  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.828168  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:10.828231  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.828360  146126 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 23:59:10.828998  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.829017  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.829091  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.829275  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:10.829333  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.829434  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.829514  146126 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0916 23:59:10.829543  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:10.829571  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:10.829617  146126 out.go:179]   - Using image docker.io/registry:3.0.0
	I0916 23:59:10.829616  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:10.829624  146126 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0916 23:59:10.829780  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:10.829822  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:59:10.830016  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:10.830051  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	W0916 23:59:10.830186  146126 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 23:59:10.830276  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.830705  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.830725  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.830732  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.830963  146126 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 23:59:10.830972  146126 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 23:59:10.831021  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 23:59:10.831034  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.831040  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:10.831194  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.831490  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.831947  146126 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 23:59:10.831960  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0916 23:59:10.831976  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:10.832045  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35387
	I0916 23:59:10.832818  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.833128  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.833615  146126 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 23:59:10.834186  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.834182  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.834229  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.834704  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.835435  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.835817  146126 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 23:59:10.835887  146126 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 23:59:10.836211  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.837635  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.837867  146126 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 23:59:10.837830  146126 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 23:59:10.838004  146126 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 23:59:10.838025  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:10.839053  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.839346  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:10.839383  146126 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 23:59:10.839406  146126 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 23:59:10.839386  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.839442  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:10.839510  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:10.840209  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:10.840331  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.840504  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:10.840560  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37719
	I0916 23:59:10.840924  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:59:10.841354  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.841431  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:10.841450  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.841777  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:10.842053  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:10.842176  146126 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 23:59:10.842329  146126 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:59:10.842343  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 23:59:10.842381  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:10.842408  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.842422  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.842896  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.843043  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.843712  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.843774  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:10.843987  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:59:10.845212  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:10.845284  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.845484  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:10.845663  146126 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 23:59:10.845711  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:10.845976  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:10.846161  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:59:10.846576  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38237
	I0916 23:59:10.846792  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.847712  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36029
	I0916 23:59:10.847776  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.847927  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.848017  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.848140  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.848208  146126 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 23:59:10.848275  146126 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:59:10.848536  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.848568  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.848624  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:10.848703  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.848719  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.849463  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37025
	I0916 23:59:10.849530  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:10.849540  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:10.849570  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.849491  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:10.849526  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.849676  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.849706  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:10.849936  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.850097  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.850194  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:10.850291  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:10.850347  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:10.850432  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.850453  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.850425  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:59:10.850586  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:10.850766  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:10.850941  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:59:10.850984  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.850960  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.851242  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.851561  146126 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 23:59:10.851622  146126 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:59:10.852400  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.853981  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.854465  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38371
	I0916 23:59:10.854875  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.855441  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.855468  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.855810  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.856007  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.856410  146126 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 23:59:10.856482  146126 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 23:59:10.856520  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:10.857236  146126 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0916 23:59:10.857273  146126 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0916 23:59:10.858403  146126 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0916 23:59:10.858409  146126 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 23:59:10.858475  146126 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0916 23:59:10.858534  146126 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 23:59:10.858538  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 23:59:10.858556  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:10.858556  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:10.858679  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.859947  146126 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0916 23:59:10.860205  146126 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 23:59:10.860222  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 23:59:10.860243  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:10.861202  146126 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 23:59:10.861230  146126 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0916 23:59:10.861253  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:10.862673  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.863820  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:10.863911  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.864180  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:10.864388  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:10.864667  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:10.864886  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:59:10.865554  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.865593  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.866022  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:10.866073  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.866269  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:10.866433  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:10.866624  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.866659  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:10.866702  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:10.866736  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.866814  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:59:10.866938  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:10.867065  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:10.867213  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:10.867274  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:10.867297  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.867333  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.867474  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:59:10.867617  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:10.867806  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:10.867968  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:10.867973  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:10.868035  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.868146  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:10.868147  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:59:10.868315  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:10.868444  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:10.868573  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:59:10.873303  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33755
	I0916 23:59:10.873765  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:10.874286  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:10.874310  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:10.874664  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:10.874918  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:10.876802  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:10.878802  146126 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 23:59:10.880361  146126 out.go:179]   - Using image docker.io/busybox:stable
	I0916 23:59:10.881715  146126 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 23:59:10.881734  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 23:59:10.881756  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:10.884974  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.885435  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:10.885485  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:10.885648  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:10.885879  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:10.886039  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:10.886166  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	W0916 23:59:11.030994  146126 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.50.1:35214->192.168.50.205:22: read: connection reset by peer
	I0916 23:59:11.031036  146126 retry.go:31] will retry after 252.823878ms: ssh: handshake failed: read tcp 192.168.50.1:35214->192.168.50.205:22: read: connection reset by peer
	W0916 23:59:11.062457  146126 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.50.1:35240->192.168.50.205:22: read: connection reset by peer
	I0916 23:59:11.062497  146126 retry.go:31] will retry after 367.220728ms: ssh: handshake failed: read tcp 192.168.50.1:35240->192.168.50.205:22: read: connection reset by peer
	W0916 23:59:11.062584  146126 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.50.1:35250->192.168.50.205:22: read: connection reset by peer
	I0916 23:59:11.062599  146126 retry.go:31] will retry after 174.907443ms: ssh: handshake failed: read tcp 192.168.50.1:35250->192.168.50.205:22: read: connection reset by peer
	W0916 23:59:11.062631  146126 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.50.1:35248->192.168.50.205:22: read: connection reset by peer
	I0916 23:59:11.062648  146126 retry.go:31] will retry after 263.839519ms: ssh: handshake failed: read tcp 192.168.50.1:35248->192.168.50.205:22: read: connection reset by peer
	I0916 23:59:11.342939  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 23:59:11.357472  146126 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 23:59:11.357504  146126 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 23:59:11.391869  146126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 23:59:11.391960  146126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 23:59:11.501203  146126 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 23:59:11.501235  146126 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 23:59:11.571935  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 23:59:11.607066  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0916 23:59:11.721623  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 23:59:11.748530  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 23:59:11.763190  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 23:59:11.785632  146126 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 23:59:11.785671  146126 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 23:59:11.793790  146126 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 23:59:11.793828  146126 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 23:59:11.798145  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0916 23:59:11.888548  146126 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 23:59:11.888581  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 23:59:11.955819  146126 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 23:59:11.955849  146126 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 23:59:12.088990  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 23:59:12.201692  146126 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 23:59:12.201717  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 23:59:12.325556  146126 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:59:12.325594  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0916 23:59:12.385116  146126 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 23:59:12.385152  146126 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 23:59:12.491594  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 23:59:12.514171  146126 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 23:59:12.514209  146126 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 23:59:12.714133  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 23:59:12.815124  146126 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 23:59:12.815157  146126 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 23:59:12.932818  146126 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 23:59:12.932863  146126 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 23:59:13.036130  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:59:13.133888  146126 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 23:59:13.133919  146126 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 23:59:13.189212  146126 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 23:59:13.189241  146126 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 23:59:13.300756  146126 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 23:59:13.300790  146126 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 23:59:13.400350  146126 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 23:59:13.400382  146126 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 23:59:13.519837  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 23:59:13.563108  146126 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 23:59:13.563150  146126 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 23:59:13.770107  146126 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 23:59:13.770135  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 23:59:13.796277  146126 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 23:59:13.796312  146126 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 23:59:14.094623  146126 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 23:59:14.094656  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 23:59:14.095971  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.752987265s)
	I0916 23:59:14.096034  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:14.096047  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:14.096381  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:14.096401  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:14.096426  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:14.096437  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:14.096760  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:14.096783  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:14.112448  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:14.112477  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:14.112872  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:14.112899  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:14.173835  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 23:59:14.262249  146126 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 23:59:14.262277  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 23:59:14.514166  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 23:59:14.742871  146126 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 23:59:14.742909  146126 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 23:59:15.095588  146126 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.703584957s)
	I0916 23:59:15.095631  146126 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.703719869s)
	I0916 23:59:15.095680  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.523712806s)
	I0916 23:59:15.095638  146126 start.go:976] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0916 23:59:15.095724  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:15.095743  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:15.095723  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.488625835s)
	I0916 23:59:15.095822  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:15.095834  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:15.096110  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:15.096122  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:15.096126  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:15.096131  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:15.096136  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:15.096140  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:15.096144  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:15.096148  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:15.096486  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:15.096512  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:15.096590  146126 node_ready.go:35] waiting up to 6m0s for node "addons-772113" to be "Ready" ...
	I0916 23:59:15.096722  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:15.096731  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:15.096843  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:15.096875  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:15.107568  146126 node_ready.go:49] node "addons-772113" is "Ready"
	I0916 23:59:15.107600  146126 node_ready.go:38] duration metric: took 10.990405ms for node "addons-772113" to be "Ready" ...
	I0916 23:59:15.107615  146126 api_server.go:52] waiting for apiserver process to appear ...
	I0916 23:59:15.107669  146126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 23:59:15.338012  146126 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 23:59:15.338053  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 23:59:15.601393  146126 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-772113" context rescaled to 1 replicas
	I0916 23:59:15.828193  146126 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 23:59:15.828221  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 23:59:16.529800  146126 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 23:59:16.529831  146126 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 23:59:16.842312  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 23:59:17.803465  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.081795832s)
	I0916 23:59:17.803531  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:17.803544  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:17.803554  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.054987228s)
	I0916 23:59:17.803604  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:17.803625  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:17.803687  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.040458308s)
	I0916 23:59:17.803737  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (6.005561936s)
	I0916 23:59:17.803743  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:17.803757  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:17.803765  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:17.803775  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:17.803842  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:17.803874  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:17.803889  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:17.803898  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:17.803932  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:17.803981  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:17.804002  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:17.804013  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:17.804028  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:17.804034  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:17.804111  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:17.804121  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:17.804123  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:17.804164  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:17.804180  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:17.804197  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:17.804211  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:17.804235  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:17.804335  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:17.804358  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:17.804364  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:17.804556  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:17.804584  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:17.804600  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:17.806417  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:17.806428  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:17.806441  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:17.806455  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:17.806462  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:17.806732  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:17.806751  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:17.806762  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:18.284013  146126 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 23:59:18.284066  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:18.288207  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:18.288816  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:18.288882  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:18.289170  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:18.289442  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:18.289631  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:18.289771  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:59:18.597685  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.508654705s)
	I0916 23:59:18.597738  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.10610768s)
	I0916 23:59:18.597752  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:18.597770  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:18.597780  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:18.597793  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:18.598118  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:18.598138  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:18.598147  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:18.598154  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:18.598190  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:18.598210  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:18.598230  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:18.598286  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:18.598301  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:18.598436  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:18.598472  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:18.598479  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:18.598600  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:18.598615  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:18.598627  146126 addons.go:479] Verifying addon registry=true in "addons-772113"
	I0916 23:59:18.601130  146126 out.go:179] * Verifying registry addon...
	I0916 23:59:18.603160  146126 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 23:59:18.627776  146126 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 23:59:18.627800  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:18.658093  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:18.658118  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:18.658452  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:18.658489  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:18.658509  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:19.053986  146126 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 23:59:19.120546  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:19.427774  146126 addons.go:238] Setting addon gcp-auth=true in "addons-772113"
	I0916 23:59:19.427848  146126 host.go:66] Checking if "addons-772113" exists ...
	I0916 23:59:19.428226  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:19.428280  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:19.443219  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45897
	I0916 23:59:19.443735  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:19.444449  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:19.444486  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:19.444922  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:19.445608  146126 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 23:59:19.445646  146126 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 23:59:19.460604  146126 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44149
	I0916 23:59:19.461211  146126 main.go:141] libmachine: () Calling .GetVersion
	I0916 23:59:19.461691  146126 main.go:141] libmachine: Using API Version  1
	I0916 23:59:19.461720  146126 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 23:59:19.462172  146126 main.go:141] libmachine: () Calling .GetMachineName
	I0916 23:59:19.462424  146126 main.go:141] libmachine: (addons-772113) Calling .GetState
	I0916 23:59:19.464358  146126 main.go:141] libmachine: (addons-772113) Calling .DriverName
	I0916 23:59:19.464618  146126 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 23:59:19.464652  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHHostname
	I0916 23:59:19.467989  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:19.468422  146126 main.go:141] libmachine: (addons-772113) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:9c:db", ip: ""} in network mk-addons-772113: {Iface:virbr2 ExpiryTime:2025-09-17 00:58:40 +0000 UTC Type:0 Mac:52:54:00:1a:9c:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:addons-772113 Clientid:01:52:54:00:1a:9c:db}
	I0916 23:59:19.468451  146126 main.go:141] libmachine: (addons-772113) DBG | domain addons-772113 has defined IP address 192.168.50.205 and MAC address 52:54:00:1a:9c:db in network mk-addons-772113
	I0916 23:59:19.468610  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHPort
	I0916 23:59:19.468817  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHKeyPath
	I0916 23:59:19.469017  146126 main.go:141] libmachine: (addons-772113) Calling .GetSSHUsername
	I0916 23:59:19.469237  146126 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/addons-772113/id_rsa Username:docker}
	I0916 23:59:19.618203  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:20.123980  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:20.660923  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:21.205128  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:21.479105  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.764924791s)
	I0916 23:59:21.479171  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:21.479183  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:21.479198  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.443038274s)
	W0916 23:59:21.479231  146126 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:21.479287  146126 retry.go:31] will retry after 287.184376ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:21.479326  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.959443013s)
	I0916 23:59:21.479351  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:21.479361  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:21.479423  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.305523582s)
	I0916 23:59:21.479504  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:21.479527  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:21.479537  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.965337431s)
	I0916 23:59:21.479585  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:21.479602  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:21.479610  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:21.479616  146126 main.go:141] libmachine: Successfully made call to close driver server
	W0916 23:59:21.479586  146126 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 23:59:21.479625  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:21.479634  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:21.479633  146126 retry.go:31] will retry after 288.604839ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 23:59:21.479640  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:21.479647  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:21.479617  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:21.479702  146126 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.37201723s)
	I0916 23:59:21.479724  146126 api_server.go:72] duration metric: took 10.78029188s to wait for apiserver process to appear ...
	I0916 23:59:21.479732  146126 api_server.go:88] waiting for apiserver healthz status ...
	I0916 23:59:21.479750  146126 api_server.go:253] Checking apiserver healthz at https://192.168.50.205:8443/healthz ...
	I0916 23:59:21.479960  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:21.479999  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:21.480005  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:21.480012  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:21.480019  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:21.480100  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:21.480109  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:21.480112  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:21.480126  146126 addons.go:479] Verifying addon metrics-server=true in "addons-772113"
	I0916 23:59:21.480137  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:21.480070  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:21.480143  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:21.480155  146126 addons.go:479] Verifying addon ingress=true in "addons-772113"
	I0916 23:59:21.480364  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:21.480376  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:21.483464  146126 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-772113 service yakd-dashboard -n yakd-dashboard
	
	I0916 23:59:21.483501  146126 out.go:179] * Verifying ingress addon...
	I0916 23:59:21.485652  146126 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 23:59:21.494347  146126 api_server.go:279] https://192.168.50.205:8443/healthz returned 200:
	ok
	I0916 23:59:21.507200  146126 api_server.go:141] control plane version: v1.34.0
	I0916 23:59:21.507238  146126 api_server.go:131] duration metric: took 27.498934ms to wait for apiserver health ...
	I0916 23:59:21.507249  146126 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 23:59:21.525607  146126 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 23:59:21.525635  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:21.534739  146126 system_pods.go:59] 17 kube-system pods found
	I0916 23:59:21.534788  146126 system_pods.go:61] "amd-gpu-device-plugin-7jbw2" [d933c91f-0ea0-4c08-b2b3-101f533f2b2e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:59:21.534797  146126 system_pods.go:61] "coredns-66bc5c9577-fdh2c" [74ded845-73df-4942-848f-8820953008f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:59:21.534806  146126 system_pods.go:61] "coredns-66bc5c9577-qtfnh" [0687fc65-0dbf-40c5-b677-213524d980b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:59:21.534814  146126 system_pods.go:61] "etcd-addons-772113" [01344ac9-0890-42a2-a206-3fd2cb643535] Running
	I0916 23:59:21.534831  146126 system_pods.go:61] "kube-apiserver-addons-772113" [518a524b-33e2-475c-8965-1877a07bd423] Running
	I0916 23:59:21.534836  146126 system_pods.go:61] "kube-controller-manager-addons-772113" [437d6587-e05d-487f-9728-8570a16c70ee] Running
	I0916 23:59:21.534847  146126 system_pods.go:61] "kube-ingress-dns-minikube" [5eb561e9-de5c-434a-adbd-c236698880bd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:59:21.534872  146126 system_pods.go:61] "kube-proxy-2kklh" [4ccfffa7-61ec-4054-939e-7f4697c59aba] Running
	I0916 23:59:21.534878  146126 system_pods.go:61] "kube-scheduler-addons-772113" [adf802f7-f2b1-4159-a8eb-7eacd5376606] Running
	I0916 23:59:21.534885  146126 system_pods.go:61] "metrics-server-85b7d694d7-9q4s4" [a22821ae-c2fa-4dc6-8854-949d14a6c5bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:59:21.534895  146126 system_pods.go:61] "nvidia-device-plugin-daemonset-gn4ld" [f1ded348-a976-4f31-bdc9-c829d0ef1245] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:59:21.534913  146126 system_pods.go:61] "registry-66898fdd98-gpg82" [fa17d4ca-3961-45bd-80b1-36bb60e50186] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:59:21.534920  146126 system_pods.go:61] "registry-creds-764b6fb674-mn98j" [c716d7b2-1a50-463d-bff9-cdd1e49227b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:59:21.534925  146126 system_pods.go:61] "registry-proxy-69jw9" [9c7b1cd3-d6e3-4846-9991-541d66666aff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:59:21.534930  146126 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d6fpn" [a2c34218-2f17-47b0-b87c-87eaecc93e3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:59:21.534935  146126 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zsqnk" [d87d4cd6-9877-4300-a4ac-b83db3076b6c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:59:21.534952  146126 system_pods.go:61] "storage-provisioner" [f5afd1bf-1409-4482-8fc7-ce0c3fbb8435] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:59:21.534962  146126 system_pods.go:74] duration metric: took 27.700565ms to wait for pod list to return data ...
	I0916 23:59:21.534974  146126 default_sa.go:34] waiting for default service account to be created ...
	I0916 23:59:21.552102  146126 default_sa.go:45] found service account: "default"
	I0916 23:59:21.552139  146126 default_sa.go:55] duration metric: took 17.149644ms for default service account to be created ...
	I0916 23:59:21.552150  146126 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 23:59:21.565321  146126 system_pods.go:86] 17 kube-system pods found
	I0916 23:59:21.565354  146126 system_pods.go:89] "amd-gpu-device-plugin-7jbw2" [d933c91f-0ea0-4c08-b2b3-101f533f2b2e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0916 23:59:21.565361  146126 system_pods.go:89] "coredns-66bc5c9577-fdh2c" [74ded845-73df-4942-848f-8820953008f6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:59:21.565378  146126 system_pods.go:89] "coredns-66bc5c9577-qtfnh" [0687fc65-0dbf-40c5-b677-213524d980b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 23:59:21.565384  146126 system_pods.go:89] "etcd-addons-772113" [01344ac9-0890-42a2-a206-3fd2cb643535] Running
	I0916 23:59:21.565389  146126 system_pods.go:89] "kube-apiserver-addons-772113" [518a524b-33e2-475c-8965-1877a07bd423] Running
	I0916 23:59:21.565396  146126 system_pods.go:89] "kube-controller-manager-addons-772113" [437d6587-e05d-487f-9728-8570a16c70ee] Running
	I0916 23:59:21.565402  146126 system_pods.go:89] "kube-ingress-dns-minikube" [5eb561e9-de5c-434a-adbd-c236698880bd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0916 23:59:21.565407  146126 system_pods.go:89] "kube-proxy-2kklh" [4ccfffa7-61ec-4054-939e-7f4697c59aba] Running
	I0916 23:59:21.565417  146126 system_pods.go:89] "kube-scheduler-addons-772113" [adf802f7-f2b1-4159-a8eb-7eacd5376606] Running
	I0916 23:59:21.565424  146126 system_pods.go:89] "metrics-server-85b7d694d7-9q4s4" [a22821ae-c2fa-4dc6-8854-949d14a6c5bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 23:59:21.565430  146126 system_pods.go:89] "nvidia-device-plugin-daemonset-gn4ld" [f1ded348-a976-4f31-bdc9-c829d0ef1245] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 23:59:21.565436  146126 system_pods.go:89] "registry-66898fdd98-gpg82" [fa17d4ca-3961-45bd-80b1-36bb60e50186] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 23:59:21.565441  146126 system_pods.go:89] "registry-creds-764b6fb674-mn98j" [c716d7b2-1a50-463d-bff9-cdd1e49227b4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0916 23:59:21.565458  146126 system_pods.go:89] "registry-proxy-69jw9" [9c7b1cd3-d6e3-4846-9991-541d66666aff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 23:59:21.565466  146126 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d6fpn" [a2c34218-2f17-47b0-b87c-87eaecc93e3a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:59:21.565472  146126 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zsqnk" [d87d4cd6-9877-4300-a4ac-b83db3076b6c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 23:59:21.565481  146126 system_pods.go:89] "storage-provisioner" [f5afd1bf-1409-4482-8fc7-ce0c3fbb8435] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 23:59:21.565497  146126 system_pods.go:126] duration metric: took 13.338788ms to wait for k8s-apps to be running ...
	I0916 23:59:21.565512  146126 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 23:59:21.565584  146126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 23:59:21.626094  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:21.766876  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:59:21.769058  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 23:59:22.000990  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:22.118684  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:22.508046  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:22.646698  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:22.655645  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.813276339s)
	I0916 23:59:22.655693  146126 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.191044809s)
	I0916 23:59:22.655709  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:22.655730  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:22.655728  146126 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.090115024s)
	I0916 23:59:22.655793  146126 system_svc.go:56] duration metric: took 1.090266623s WaitForService to wait for kubelet
	I0916 23:59:22.655895  146126 kubeadm.go:578] duration metric: took 11.956450097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 23:59:22.655928  146126 node_conditions.go:102] verifying NodePressure condition ...
	I0916 23:59:22.656072  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:22.656089  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:22.656106  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:22.656117  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:22.656126  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:22.656382  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:22.656397  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:22.656414  146126 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-772113"
	I0916 23:59:22.657328  146126 out.go:179] * Verifying csi-hostpath-driver addon...
	I0916 23:59:22.657335  146126 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0916 23:59:22.658735  146126 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0916 23:59:22.659358  146126 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 23:59:22.660005  146126 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 23:59:22.660027  146126 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 23:59:22.693432  146126 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 23:59:22.693456  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:22.700570  146126 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 23:59:22.700600  146126 node_conditions.go:123] node cpu capacity is 2
	I0916 23:59:22.700613  146126 node_conditions.go:105] duration metric: took 44.679756ms to run NodePressure ...
	I0916 23:59:22.700627  146126 start.go:241] waiting for startup goroutines ...
	I0916 23:59:22.849777  146126 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 23:59:22.849823  146126 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 23:59:22.996005  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:23.067185  146126 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 23:59:23.067212  146126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 23:59:23.116494  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:23.167749  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:23.244750  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 23:59:23.494747  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:23.610722  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:23.665756  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:23.993874  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:24.108663  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:24.169697  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:24.496604  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:24.609779  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:24.670367  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:24.998883  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:25.161484  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:25.222875  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:25.446381  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.679454851s)
	W0916 23:59:25.446447  146126 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:25.446475  146126 retry.go:31] will retry after 512.712766ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:25.446518  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.677382841s)
	I0916 23:59:25.446566  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.201782463s)
	I0916 23:59:25.446572  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:25.446590  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:25.446596  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:25.446605  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:25.446951  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:25.446977  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:25.446986  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:25.446999  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:25.447012  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:25.447024  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:25.447230  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0916 23:59:25.447276  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:25.447289  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:25.447473  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:25.447503  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:25.447518  146126 main.go:141] libmachine: Making call to close driver server
	I0916 23:59:25.447532  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0916 23:59:25.447827  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0916 23:59:25.447841  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 23:59:25.448955  146126 addons.go:479] Verifying addon gcp-auth=true in "addons-772113"
	I0916 23:59:25.450988  146126 out.go:179] * Verifying gcp-auth addon...
	I0916 23:59:25.453575  146126 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 23:59:25.465662  146126 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 23:59:25.465692  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:25.496173  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:25.610841  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:25.664844  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:25.958611  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:25.959619  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:59:25.993953  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:26.110499  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:26.165669  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:26.458178  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:26.490105  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:26.609209  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:26.709639  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:26.958953  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:27.060053  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:27.159445  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:27.165992  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:27.347476  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.387811008s)
	W0916 23:59:27.347525  146126 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:27.347553  146126 retry.go:31] will retry after 433.55255ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:27.460536  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:27.495507  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:27.607047  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:27.665454  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:27.781657  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:59:27.962731  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:27.990454  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:28.109661  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:28.164354  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:28.460648  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:28.492778  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:28.609052  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:28.665664  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:28.874097  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.092398525s)
	W0916 23:59:28.874156  146126 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:28.874183  146126 retry.go:31] will retry after 863.295346ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:28.957136  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:28.990732  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:29.108941  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:29.164382  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:29.457546  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:29.491396  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:29.606995  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:29.663622  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:29.737656  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:59:29.957917  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:29.991370  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:30.106033  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:30.164906  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:30.460113  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:30.492633  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:30.606996  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:30.664022  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:30.827391  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.089687721s)
	W0916 23:59:30.827454  146126 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:30.827479  146126 retry.go:31] will retry after 1.249319746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
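The block above repeats below with growing delays (1.2s, 1.1s, 1.9s, 5.8s, 3.4s, 8.5s, 20.1s): each kubectl apply attempt is rejected by client-side validation because the rendered ig-crd.yaml evidently carries no top-level apiVersion/kind fields, and the addon installer schedules another attempt. What follows is only a minimal sketch of that backoff-and-retry shape, assuming a generic apply callback; it is not minikube's actual retry.go or addons.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs apply until it succeeds or maxAttempts is
// exhausted, sleeping an exponentially growing, jittered delay between
// attempts, the same shape as the "will retry after ..." lines above.
// Illustrative only; not minikube's retry helper.
func retryWithBackoff(apply func() error, maxAttempts int, base time.Duration) error {
	var lastErr error
	delay := base
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if lastErr = apply(); lastErr == nil {
			return nil
		}
		if attempt == maxAttempts {
			break
		}
		// Add up to 50% random jitter so concurrent retries do not align.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		sleep := delay + jitter
		fmt.Printf("apply failed, will retry after %s: %v\n", sleep, lastErr)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	attempt := 0
	err := retryWithBackoff(func() error {
		attempt++
		if attempt < 4 {
			return errors.New("error validating ig-crd.yaml: apiVersion not set, kind not set")
		}
		return nil
	}, 6, time.Second)
	fmt.Println("result:", err)
}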
	I0916 23:59:30.958531  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:30.992164  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:31.107471  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:31.164603  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:31.457529  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:31.491446  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:31.609477  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:31.666569  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:31.957722  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:31.989207  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:32.077319  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:59:32.110926  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:32.167167  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:32.759102  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:32.759155  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:32.759279  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:32.762818  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:32.957349  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:32.991437  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:33.109298  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:33.166048  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:33.298427  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.221056163s)
	W0916 23:59:33.298506  146126 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:33.298535  146126 retry.go:31] will retry after 1.10666362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:33.460085  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:33.489364  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:33.609032  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:33.664479  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:33.959554  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:33.992796  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:34.110792  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:34.163697  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:34.405549  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:59:34.457821  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:34.490626  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:34.609620  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:34.668028  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:34.959532  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:34.992674  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:35.107889  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:35.164045  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0916 23:59:35.318299  146126 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:35.318352  146126 retry.go:31] will retry after 1.861435812s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:35.460040  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:35.491377  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:35.606704  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:35.663087  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:35.958567  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:35.990653  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:36.107639  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:36.168423  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:36.723150  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:36.725449  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:36.726067  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:36.726336  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:37.180625  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:59:37.184181  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:37.184850  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:37.185387  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:37.185985  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:37.461795  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:37.492990  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:37.609013  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:37.666281  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:37.958115  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:37.991040  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:38.108564  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:38.165871  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:38.184892  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.004219392s)
	W0916 23:59:38.184937  146126 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:38.184958  146126 retry.go:31] will retry after 5.845626658s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:38.458596  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:38.492781  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:38.607522  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:38.665570  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:38.960644  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:38.992973  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:39.107137  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:39.165602  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:39.457315  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:39.490050  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:39.607654  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:39.663318  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:39.957441  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:39.989588  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:40.107811  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:40.166564  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:40.456824  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:40.502590  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:40.606508  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:40.662667  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:40.959118  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:40.991819  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:41.107546  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:41.163283  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:41.459077  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:41.559888  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:41.607333  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:41.663918  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:41.958579  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:41.990455  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:42.110644  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:42.173370  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:42.458597  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:42.492234  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:42.608670  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:42.667756  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:42.957324  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:42.990101  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:43.110472  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:43.166297  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:43.458319  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:43.493637  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:43.610199  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:43.668913  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:43.959682  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:43.990035  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:44.031068  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:59:44.110582  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:44.165296  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:44.461440  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:44.493371  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:44.607463  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:44.665924  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:44.966341  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:44.992070  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:45.109123  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:45.163119  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:45.167269  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.136154366s)
	W0916 23:59:45.167313  146126 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:45.167337  146126 retry.go:31] will retry after 3.391306671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:45.461489  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:45.489888  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:45.608568  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:45.664188  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:45.958105  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:45.989750  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:46.107528  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:46.162810  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:46.457880  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:46.489334  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:46.607171  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:46.662752  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:46.958615  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:46.989586  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:47.108524  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:47.163178  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:47.460313  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:47.491760  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:47.609804  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:47.663450  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:47.960984  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:47.989554  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:48.108695  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:48.167582  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:48.457330  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:48.490051  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:48.559124  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:59:48.609399  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:48.709504  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:48.960522  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:48.992786  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:49.107204  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:49.164822  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0916 23:59:49.349265  146126 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:49.349326  146126 retry.go:31] will retry after 8.539384151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:49.458281  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:49.490434  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:49.606751  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:49.663755  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:49.959717  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:49.993515  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:50.107770  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:50.163634  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:50.459436  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:50.492140  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:50.609657  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:50.668108  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:50.959629  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:50.994690  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:51.109109  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:51.164158  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:51.478095  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:51.490929  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:51.606548  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:51.665776  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:51.957571  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:51.990627  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:52.108725  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:52.167702  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:52.456969  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:52.491482  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:52.608448  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:52.662955  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:52.959705  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:52.990399  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:53.108941  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:53.164637  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:53.480938  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:53.493257  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:53.668975  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:53.671550  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:53.960991  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:53.990780  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:54.107009  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:54.164067  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:54.462476  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:54.686094  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:54.686227  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:54.686425  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:54.962798  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:54.992402  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:55.107818  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:55.168428  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:55.467585  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:55.493499  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:55.609618  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:55.664396  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:55.961392  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:55.990433  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:56.108567  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:56.164532  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:56.460812  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:56.499133  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:56.607828  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:56.667428  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:56.961775  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:56.990586  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:57.108576  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:57.165220  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:57.460045  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:57.490727  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:57.611785  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:57.685548  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:57.889613  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0916 23:59:57.958201  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:57.991709  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:58.111804  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:58.167154  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:58.461073  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:58.493666  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:58.609225  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:58.665794  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:58.960623  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:58.990642  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:59.108394  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:59.165042  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:59.432142  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.542479138s)
	W0916 23:59:59.432204  146126 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:59.432232  146126 retry.go:31] will retry after 20.06245935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 23:59:59.458342  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:59.493638  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 23:59:59.608204  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 23:59:59.668477  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 23:59:59.960301  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 23:59:59.997788  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:00.108418  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:00:00.166240  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:00.459659  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:00.492146  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:00.608072  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:00:00.664068  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:00.958663  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:00.991871  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:01.113991  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:00:01.213478  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:01.458219  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:01.490909  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:01.608311  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:00:01.668732  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:01.960050  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:01.992692  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:02.108732  146126 kapi.go:107] duration metric: took 43.505564736s to wait for kubernetes.io/minikube-addons=registry ...
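The kapi.go:96 lines are a per-selector poll (roughly twice a second) that ends in a single kapi.go:107 "duration metric" line once every matching pod is Ready, here about 43.5s for kubernetes.io/minikube-addons=registry. Below is a minimal sketch of that polling shape, assuming a hypothetical podsReady callback in place of whatever minikube actually queries; it is not the real kapi implementation.

package main

import (
	"context"
	"fmt"
	"time"
)

// waitForSelector polls podsReady for a label selector until it reports
// ready or the context expires, mirroring the repeated "waiting for pod"
// lines followed by one "duration metric" line. podsReady is an assumed
// stand-in readiness probe, not minikube's API.
func waitForSelector(ctx context.Context, selector string, interval time.Duration,
	podsReady func(ctx context.Context, selector string) (bool, error)) (time.Duration, error) {

	start := time.Now()
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ready, err := podsReady(ctx, selector)
		if err != nil {
			fmt.Printf("waiting for pod %q, current state: Pending: [%v]\n", selector, err)
		} else if ready {
			return time.Since(start), nil
		} else {
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		}
		select {
		case <-ctx.Done():
			return time.Since(start), ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Fake readiness check for the sketch: reports ready after ~2 seconds.
	start := time.Now()
	ready := func(ctx context.Context, selector string) (bool, error) {
		return time.Since(start) > 2*time.Second, nil
	}

	took, err := waitForSelector(ctx, "kubernetes.io/minikube-addons=registry", 500*time.Millisecond, ready)
	fmt.Printf("duration metric: took %s to wait for selector (err=%v)\n", took, err)
}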
	I0917 00:00:02.164498  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:02.459904  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:02.490505  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:02.669081  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:03.105051  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:03.105339  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:03.167591  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:03.459041  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:03.490169  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:03.663588  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:03.959491  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:03.991012  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:04.163628  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:04.458321  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:04.490304  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:04.664198  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:04.963049  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:04.995025  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:05.171132  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:05.458681  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:05.490887  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:05.663475  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:05.959488  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:05.991229  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:06.164338  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:06.458648  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:06.490291  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:06.665181  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:06.961114  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:06.990827  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:07.167102  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:07.458469  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:07.490268  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:07.665210  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:07.959728  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:08.061448  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:08.163456  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:08.458595  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:08.490702  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:08.663882  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:08.957836  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:08.989846  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:09.164165  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:09.457595  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:09.491427  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:09.663706  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:09.957736  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:09.989971  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:10.164416  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:10.458939  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:10.489443  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:10.663122  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:10.957725  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:10.990746  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:11.163959  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:11.457547  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:11.490382  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:11.665329  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:11.958230  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:11.992320  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:12.165257  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:12.458099  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:12.489951  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:12.663773  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:12.957524  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:12.990550  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:13.163487  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:13.458399  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:13.490079  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:13.664466  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:13.958197  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:13.990177  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:14.164273  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:14.459015  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:14.489846  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:14.663758  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:14.958034  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:14.989880  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:15.164336  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:15.459465  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:15.490269  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:15.664563  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:15.958312  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:15.990749  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:16.163680  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:16.457174  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:16.489346  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:16.663655  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:16.958215  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:16.989849  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:17.163518  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:17.458268  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:17.489951  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:17.664653  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:17.958392  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:17.991404  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:18.165209  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:18.458396  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:18.490675  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:18.663454  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:18.959540  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:18.991362  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:19.164274  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:19.460324  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:19.491567  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:19.495599  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:00:19.666194  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:19.960088  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:19.991846  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:20.164766  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0917 00:00:20.434953  146126 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:00:20.435002  146126 retry.go:31] will retry after 21.470475825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
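
The `retry.go:31` line above is minikube's generic retry helper backing off before re-running the failed `kubectl apply`. A minimal sketch of that pattern in Go follows; the function name, delay values, and error handling here are assumptions for illustration, not minikube's actual retry.go implementation.

package main

import (
	"fmt"
	"time"
)

// retry runs fn up to attempts times, sleeping with a growing delay between
// tries, and returns the last error if every attempt fails. This mirrors the
// "apply failed, will retry after ..." behaviour visible in the log above,
// but is only a sketch of the idea.
func retry(attempts int, initialDelay time.Duration, fn func() error) error {
	var err error
	delay := initialDelay
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // simple doubling; the delays in the real log look randomized
	}
	return err
}

func main() {
	// Hypothetical usage: re-run a flaky operation a few times.
	err := retry(3, 500*time.Millisecond, func() error {
		return fmt.Errorf("kubectl apply failed") // stand-in for the real call
	})
	fmt.Println("final result:", err)
}
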
	I0917 00:00:20.457747  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:20.491504  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:20.665446  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:20.962195  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:20.989756  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:21.163976  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:21.458288  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:21.490503  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:21.664197  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:21.958163  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:21.990050  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:22.165471  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:22.459450  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:22.489878  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:22.663842  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:22.957295  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:22.989720  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:23.164026  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:23.458591  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:23.491772  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:23.663611  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:23.959590  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:23.990368  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:24.163593  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:24.458154  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:24.489938  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:24.663798  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:24.958601  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:24.992341  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:25.163891  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:25.458247  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:25.492822  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:25.664014  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:25.958412  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:25.990551  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:26.164718  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:26.458966  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:26.489166  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:26.663948  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:26.958661  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:26.989333  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:27.163743  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:27.458193  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:27.489990  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:27.663654  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:27.958448  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:27.990384  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:28.163966  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:28.457483  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:28.489944  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:28.663491  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:28.958414  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:28.990309  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:29.165246  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:29.459390  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:29.490297  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:29.663728  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:29.958923  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:29.998258  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:30.165178  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:30.458313  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:30.490740  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:30.664010  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:30.957989  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:30.989696  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:31.165419  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:31.458676  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:31.490887  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:31.663155  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:31.958376  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:31.990138  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:32.166036  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:32.458383  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:32.492097  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:32.664214  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:32.958065  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:32.991018  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:33.163440  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:33.458445  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:33.489750  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:33.663620  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:33.960002  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:33.989593  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:34.167581  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:34.463092  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:34.494127  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:34.666685  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:35.056620  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:35.058623  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:35.165563  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:35.458432  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:35.490161  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:35.722819  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:35.959934  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:35.991673  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:36.167183  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:36.458798  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:36.489656  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:36.671077  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:36.958954  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:36.989927  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:37.164030  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:37.457480  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:37.490033  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:37.663626  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:37.958228  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:37.991880  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:38.167217  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:38.459299  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:38.490622  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:38.666317  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:38.963702  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:38.994492  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:39.165946  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:39.461193  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:39.491442  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:39.663154  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:39.957829  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:39.989679  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:40.165680  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:40.458475  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:40.490815  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:40.663841  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:40.958477  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:40.991720  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:41.164730  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:41.459110  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:41.492961  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:41.906737  146126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:00:41.990103  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:41.990232  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:41.990420  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:42.166538  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:42.463785  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:42.495099  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:42.669105  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:42.959690  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:42.992768  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:43.089965  146126 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.183173369s)
	W0917 00:00:43.090034  146126 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:00:43.090109  146126 main.go:141] libmachine: Making call to close driver server
	I0917 00:00:43.090132  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0917 00:00:43.090405  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0917 00:00:43.090424  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 00:00:43.090435  146126 main.go:141] libmachine: Making call to close driver server
	I0917 00:00:43.090443  146126 main.go:141] libmachine: (addons-772113) Calling .Close
	I0917 00:00:43.090695  146126 main.go:141] libmachine: (addons-772113) DBG | Closing plugin on server side
	I0917 00:00:43.090785  146126 main.go:141] libmachine: Successfully made call to close driver server
	I0917 00:00:43.090803  146126 main.go:141] libmachine: Making call to close connection to plugin binary
	W0917 00:00:43.090968  146126 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
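
Both failed applies above come from kubectl's client-side schema validation: every YAML document fed to `kubectl apply` must declare top-level `apiVersion` and `kind` fields, and the error indicates at least one document in /etc/kubernetes/addons/ig-crd.yaml does not. A minimal Go sketch of that check follows, assuming gopkg.in/yaml.v3 and a manifest path passed on the command line; it is illustrative only and is not minikube's or kubectl's validation code.

package main

import (
	"bytes"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// typeMeta mirrors the two top-level fields the kubectl error reports as
// missing ("apiVersion not set, kind not set").
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: checkmeta <manifest.yaml>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// A manifest file may hold several YAML documents separated by "---";
	// decode and check each one in turn.
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for i := 0; ; i++ {
		var tm typeMeta
		if err := dec.Decode(&tm); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintf(os.Stderr, "document %d: %v\n", i, err)
			os.Exit(1)
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			fmt.Printf("document %d: apiVersion and/or kind not set\n", i)
			os.Exit(1)
		}
		fmt.Printf("document %d: %s %s\n", i, tm.APIVersion, tm.Kind)
	}
}

Running such a check against the addon manifest before applying would surface the same complaint kubectl prints above, without needing a live cluster.
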
	I0917 00:00:43.175517  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:43.462352  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:43.495297  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:43.667773  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:43.957363  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:43.991038  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:44.164302  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:44.459733  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:44.497039  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:44.665106  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:44.960135  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:45.061453  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:45.164782  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:45.462416  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:45.494979  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:45.666923  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:45.960470  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:45.990845  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:46.164259  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:46.458330  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:46.490262  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:46.664505  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:46.957961  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:46.989217  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:47.165299  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:47.459205  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:47.494309  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:47.663902  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:47.958031  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:47.990112  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:48.163750  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:48.479530  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:48.499133  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:48.664332  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:48.958914  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:48.992930  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:49.164472  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:49.464758  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:49.491237  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:49.668012  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:49.963953  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:49.998624  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:50.165293  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:50.473244  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:50.499697  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:50.666886  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:50.959213  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:50.990353  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:51.164708  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:51.459580  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:51.543967  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:51.665174  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:51.958166  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:51.989758  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:52.164644  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:52.457298  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:52.490579  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:52.664132  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:52.958931  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:52.990160  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:53.173825  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:53.460656  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:53.565281  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:53.665013  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:53.957723  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:53.990795  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:54.164221  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:54.460014  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:54.491914  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:54.669129  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:54.959067  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:54.990481  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:55.165152  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:55.462476  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:55.499300  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:55.671442  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:55.962150  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:55.993687  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:56.165340  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:56.462657  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:56.563292  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:56.664635  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:56.974298  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:57.072790  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:57.166782  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:57.459485  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:57.495001  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:57.664486  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:57.958706  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:57.990004  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:58.166439  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:58.465291  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:58.490393  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:58.665538  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:58.960793  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:58.991147  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:59.170215  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:59.459050  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:00:59.489832  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:00:59.664499  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:00:59.960903  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:00.061439  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:00.164423  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:01:00.460652  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:00.490184  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:00.663582  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:01:00.960318  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:00.990959  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:01.164838  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:01:01.457660  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:01.490421  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:01.663429  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:01:01.959737  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:01.991608  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:02.164502  146126 kapi.go:107] duration metric: took 1m39.505139737s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 00:01:02.456829  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:02.490155  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:02.959686  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:02.990721  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:03.458099  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:03.490034  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:03.958710  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:03.989559  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:04.457537  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:04.505304  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:04.960154  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:04.990313  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:05.459066  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:05.490481  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:05.964957  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:05.989435  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:06.458529  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:06.490332  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:06.959630  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:06.990457  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:07.457877  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:07.490078  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:07.958171  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:07.990589  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:08.457814  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:08.490320  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:08.960700  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:08.989432  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:09.458042  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:09.490308  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:09.958831  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:09.991107  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:10.458399  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:10.490424  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:10.959254  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:10.991583  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:11.458031  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:11.490577  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:11.959068  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:11.990767  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:12.460545  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:12.491286  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:12.960067  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:12.990879  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:13.458261  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:13.490077  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:13.959572  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:13.991899  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:14.458589  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:14.491427  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:14.958466  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:14.991842  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:15.458362  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:15.491043  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:15.963257  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:15.990502  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:16.458325  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:16.491243  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:16.960292  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:16.989962  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:17.458074  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:17.489689  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:17.957567  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:17.990625  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:18.457103  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:18.489984  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:18.957978  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:18.990278  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:19.459448  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:19.489810  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:19.958128  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:19.990004  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:20.460620  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:20.490700  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:20.959077  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:20.996331  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:21.458347  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:21.490846  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:21.957761  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:21.991223  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:22.457924  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:22.489780  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:22.958561  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:23.059543  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:23.457480  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:23.490275  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:23.957663  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:23.990659  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:24.457150  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:24.490218  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:24.959827  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:24.990297  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:25.458576  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:25.490602  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:25.961133  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:25.990247  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:26.458022  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:26.489947  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:26.958714  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:27.059286  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:27.458700  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:27.490713  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:27.957653  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:27.990517  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:28.457155  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:28.490346  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:28.957370  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:28.989957  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:29.458647  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:29.490358  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:29.959112  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:29.989969  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:30.460222  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:30.491356  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:30.958996  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:30.990809  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:31.457767  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:31.489978  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:31.957918  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:31.989415  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:32.458062  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:32.489503  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:32.957509  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:32.990965  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:33.458480  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:33.490698  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:33.957564  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:33.991218  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:34.457936  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:34.489940  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:34.958929  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:34.990235  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:35.459186  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:35.490332  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:35.961530  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:35.990175  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:36.457619  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:36.490893  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:36.957557  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:36.991603  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:37.460504  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:37.489993  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:37.959595  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:37.991407  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:38.458346  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:38.491417  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:38.958800  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:38.992278  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:39.458798  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:39.494113  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:39.961056  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:39.995324  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:40.459144  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:40.492645  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:40.958777  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:40.994547  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:41.461422  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:41.497234  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:41.965566  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:41.992148  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:42.464090  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:42.493298  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:42.957837  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:42.990830  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:43.458763  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:43.490842  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:43.957704  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:43.993417  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:44.463156  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:44.499329  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:45.020251  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:45.020818  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:45.459837  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:45.492225  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:45.961991  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:45.990055  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:46.460683  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:46.492465  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:46.956918  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:46.989673  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:47.458161  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:47.490459  146126 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:01:47.960768  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:47.990762  146126 kapi.go:107] duration metric: took 2m26.505099749s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 00:01:48.461650  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:48.967909  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:49.464478  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:49.957613  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:50.457828  146126 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:01:50.957428  146126 kapi.go:107] duration metric: took 2m25.503851041s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 00:01:50.959515  146126 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-772113 cluster.
	I0917 00:01:50.961170  146126 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 00:01:50.962593  146126 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0917 00:01:50.964228  146126 out.go:179] * Enabled addons: default-storageclass, nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, storage-provisioner, registry-creds, cloud-spanner, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0917 00:01:50.965808  146126 addons.go:514] duration metric: took 2m40.266330266s for enable addons: enabled=[default-storageclass nvidia-device-plugin amd-gpu-device-plugin ingress-dns storage-provisioner registry-creds cloud-spanner storage-provisioner-rancher metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0917 00:01:50.965899  146126 start.go:246] waiting for cluster config update ...
	I0917 00:01:50.965933  146126 start.go:255] writing updated cluster config ...
	I0917 00:01:50.966270  146126 ssh_runner.go:195] Run: rm -f paused
	I0917 00:01:50.974525  146126 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:01:50.979599  146126 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fdh2c" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:50.986178  146126 pod_ready.go:94] pod "coredns-66bc5c9577-fdh2c" is "Ready"
	I0917 00:01:50.986221  146126 pod_ready.go:86] duration metric: took 6.585976ms for pod "coredns-66bc5c9577-fdh2c" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:50.989335  146126 pod_ready.go:83] waiting for pod "etcd-addons-772113" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:50.995766  146126 pod_ready.go:94] pod "etcd-addons-772113" is "Ready"
	I0917 00:01:50.995795  146126 pod_ready.go:86] duration metric: took 6.433228ms for pod "etcd-addons-772113" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:50.998988  146126 pod_ready.go:83] waiting for pod "kube-apiserver-addons-772113" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:51.005931  146126 pod_ready.go:94] pod "kube-apiserver-addons-772113" is "Ready"
	I0917 00:01:51.005963  146126 pod_ready.go:86] duration metric: took 6.939698ms for pod "kube-apiserver-addons-772113" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:51.008441  146126 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-772113" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:51.380390  146126 pod_ready.go:94] pod "kube-controller-manager-addons-772113" is "Ready"
	I0917 00:01:51.380427  146126 pod_ready.go:86] duration metric: took 371.958776ms for pod "kube-controller-manager-addons-772113" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:51.579893  146126 pod_ready.go:83] waiting for pod "kube-proxy-2kklh" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:51.980263  146126 pod_ready.go:94] pod "kube-proxy-2kklh" is "Ready"
	I0917 00:01:51.980292  146126 pod_ready.go:86] duration metric: took 400.372213ms for pod "kube-proxy-2kklh" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:52.178747  146126 pod_ready.go:83] waiting for pod "kube-scheduler-addons-772113" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:52.579826  146126 pod_ready.go:94] pod "kube-scheduler-addons-772113" is "Ready"
	I0917 00:01:52.579888  146126 pod_ready.go:86] duration metric: took 401.108467ms for pod "kube-scheduler-addons-772113" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:01:52.579908  146126 pod_ready.go:40] duration metric: took 1.605330541s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:01:52.629528  146126 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 00:01:52.631465  146126 out.go:179] * Done! kubectl is now configured to use "addons-772113" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.582934398Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1850f62b-a5f9-4b52-8caa-3c105fb64501 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.583037558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1850f62b-a5f9-4b52-8caa-3c105fb64501 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.583412972Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3f9817b7afed2279934c5a935afc5f87ea34faa2ba7b48b93a432caf9911c2d,PodSandboxId:3a677e98fdcfe953fc28286c10f3021c97339f8784686ff50a9b3ae6dcab2d15,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1758067370662915960,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47ae305d-ca3e-4058-a73e-7fbde8abf594,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf230de62470f581feee9c7d9d13dd58301f7edbddd95bd1926001e9ff099bc,PodSandboxId:00118d4a08fd6c53cba1cefc9db0e96d607dfb8b3d12dd087064f2f43e77d0a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758067317191641144,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eac57424-ccea-45eb-a612-1e6f0b0fc281,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e951934e3cba2bdd0d8a5eeb48b8fb05e16c351c1ddcdfa4e14b98c90b05b56,PodSandboxId:cec0f13cc80ff023ec505650f662958b767ceed90f847d5a7377cec6d656769b,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758067307171257479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-ctrt2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 141750b1-cfaf-4d96-9960-ad324ef033cf,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b933ac39ad371319ee2f11983ef66aa20020367282c5ebe97b0b5c09ce3f8946,PodSandboxId:c402290bae9353f64db55ab7bd552a47bcc1202c8978c93fd34cad4743432c23,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1758067248398063786,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hk8np,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a38f38a3-4ad2-48ca-86dc-79da71258ad2,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a8e65cc284050cc15fcdc541d18742738708564313fdbbebc7ff9eafe753e5,PodSandboxId:7b42178063192d41f028f436695f90055e3758b8f54e84e5470c5573404bed22,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758067247283903672,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9znr2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4475e247-ba5e-44e9-95f3-10fc3ddd1aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46493a2b014479e93202fbe43723b0a7a650eeb9412ac71778599699ae7b0520,PodSandboxId:39ed4283b599cefbcd2d244701c5beaa5c9324873cbd2789404aaf7d44e52661,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:966
0a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758067236651394621,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2zjwp,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 58263e6e-d425-489e-ad7b-499cdfd090f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1843391d60b5f978b498c6ac3eb0560b0325552e15889103a1ab8c2f0c41f3be,PodSandboxId:f2aef03b8e4d6ddfe528d5142a8708a2dc1bad931f76e8b98276f03f7d67907e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758067197522248325,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb561e9-de5c-434a-adbd-c236698880bd,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17e55359515f4ac3317d1efab37f38d7a9b656a9e6821d64c9415b16be7cf72,PodSandboxId:347320fdd98461135fcb6c25b523425e7381e2c8e3f407896ef8cc0c8783b529,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758067181367254003,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7jbw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d933c91f-0ea0-4c08-b2b3-101f533f2b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:591afd015554d5680abab1850869083adef6cdfc8e8224dc3cb0701810821383,PodSandboxId:e51a6efc1f19b520becc092616e2b50df2a82094901cb5cd4235e0c29b96a761,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758067160588441992,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5afd1bf-1409-4482-8fc7-ce0c3fbb8435,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b28e4a4ee7058988fd781f9b4ce4676fa6f5b445f662f9cc921e30427f7cce,PodSandboxId:5f01da8ab6d33f6fed3d1e6e103bc5851468867ef8eac30c23e5a6586fb2fb79,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758067152653759472,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fdh2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ded845-73df-4942-848f-8820953008f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e0e44c311f791915b374520487cd97c39e16ab2d64f9b8043b8bc5f4be453ab,PodSandboxId:30d210ffbba2133d5b840ea13b2d710479eb51836e77f5b8a836ed668691d5d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758067151905890572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2kklh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ccfffa7-61ec-4054-939e-7f4697c59aba,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b641d341ad0fb32498d4bc38921c1f5fa0118f8ad6c1996c47f8c5ea8695bdf9,PodSandboxId:789a3f23ede662dd3e0345df78aae8aded7644ac21f06311f92380dfe5aabc8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758067140046719418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-772113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c977716c2a213edf403423c1d994de,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPor
t\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7af430f49a307765efa81a2e58d27dd1884d71b3642027993ea89193e4382485,PodSandboxId:fd20dfacd6d068e1858fe6e2d2723139df82f1de47492d78bcab31877f0dd93e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758067140030190139,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-772113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de14323d77445415967489e9bc2b9259,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a6c33dde35735e1d231ebb1d5f8661451016dd53aa45aaae37879b785d733d9,PodSandboxId:3a61d9222cf2fbd38126710da3e45346525e4d5afbad6e9cc1b4fe0ce22bcd6b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758067140024420659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-772113,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8e3f3db1c9264cdaf504ecceede9f845,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6019ddb1f64b8ffc13fe7d0699a4cfa5b01d4a05000bd7d8a3ba80cee1c56dc5,PodSandboxId:34a4e8f079f1a8a016c90e81289a54d830b4318632cf5f8aa9afe19c7732e429,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758067139974349007,Labels:map[string]string{io.kubernetes.container
.name: etcd,io.kubernetes.pod.name: etcd-addons-772113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bdc197ef501906b0a6dbef768d6c654,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1850f62b-a5f9-4b52-8caa-3c105fb64501 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.632676914Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=025a1151-eb4c-407b-8a25-f24571f9f519 name=/runtime.v1.RuntimeService/Version
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.632751524Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=025a1151-eb4c-407b-8a25-f24571f9f519 name=/runtime.v1.RuntimeService/Version
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.634378775Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=915afbd8-7eec-46c7-8639-de313e7567af name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.635983676Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758067512635951546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=915afbd8-7eec-46c7-8639-de313e7567af name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.636833489Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9a2ce86-d6ec-48b4-8031-25821c26cdec name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.636902164Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9a2ce86-d6ec-48b4-8031-25821c26cdec name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.637756109Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3f9817b7afed2279934c5a935afc5f87ea34faa2ba7b48b93a432caf9911c2d,PodSandboxId:3a677e98fdcfe953fc28286c10f3021c97339f8784686ff50a9b3ae6dcab2d15,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1758067370662915960,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47ae305d-ca3e-4058-a73e-7fbde8abf594,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf230de62470f581feee9c7d9d13dd58301f7edbddd95bd1926001e9ff099bc,PodSandboxId:00118d4a08fd6c53cba1cefc9db0e96d607dfb8b3d12dd087064f2f43e77d0a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758067317191641144,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eac57424-ccea-45eb-a612-1e6f0b0fc281,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e951934e3cba2bdd0d8a5eeb48b8fb05e16c351c1ddcdfa4e14b98c90b05b56,PodSandboxId:cec0f13cc80ff023ec505650f662958b767ceed90f847d5a7377cec6d656769b,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758067307171257479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-ctrt2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 141750b1-cfaf-4d96-9960-ad324ef033cf,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b933ac39ad371319ee2f11983ef66aa20020367282c5ebe97b0b5c09ce3f8946,PodSandboxId:c402290bae9353f64db55ab7bd552a47bcc1202c8978c93fd34cad4743432c23,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1758067248398063786,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hk8np,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a38f38a3-4ad2-48ca-86dc-79da71258ad2,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a8e65cc284050cc15fcdc541d18742738708564313fdbbebc7ff9eafe753e5,PodSandboxId:7b42178063192d41f028f436695f90055e3758b8f54e84e5470c5573404bed22,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758067247283903672,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9znr2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4475e247-ba5e-44e9-95f3-10fc3ddd1aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46493a2b014479e93202fbe43723b0a7a650eeb9412ac71778599699ae7b0520,PodSandboxId:39ed4283b599cefbcd2d244701c5beaa5c9324873cbd2789404aaf7d44e52661,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:966
0a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758067236651394621,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2zjwp,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 58263e6e-d425-489e-ad7b-499cdfd090f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1843391d60b5f978b498c6ac3eb0560b0325552e15889103a1ab8c2f0c41f3be,PodSandboxId:f2aef03b8e4d6ddfe528d5142a8708a2dc1bad931f76e8b98276f03f7d67907e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758067197522248325,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb561e9-de5c-434a-adbd-c236698880bd,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17e55359515f4ac3317d1efab37f38d7a9b656a9e6821d64c9415b16be7cf72,PodSandboxId:347320fdd98461135fcb6c25b523425e7381e2c8e3f407896ef8cc0c8783b529,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758067181367254003,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7jbw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d933c91f-0ea0-4c08-b2b3-101f533f2b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:591afd015554d5680abab1850869083adef6cdfc8e8224dc3cb0701810821383,PodSandboxId:e51a6efc1f19b520becc092616e2b50df2a82094901cb5cd4235e0c29b96a761,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758067160588441992,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5afd1bf-1409-4482-8fc7-ce0c3fbb8435,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b28e4a4ee7058988fd781f9b4ce4676fa6f5b445f662f9cc921e30427f7cce,PodSandboxId:5f01da8ab6d33f6fed3d1e6e103bc5851468867ef8eac30c23e5a6586fb2fb79,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758067152653759472,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fdh2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ded845-73df-4942-848f-8820953008f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e0e44c311f791915b374520487cd97c39e16ab2d64f9b8043b8bc5f4be453ab,PodSandboxId:30d210ffbba2133d5b840ea13b2d710479eb51836e77f5b8a836ed668691d5d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758067151905890572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2kklh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ccfffa7-61ec-4054-939e-7f4697c59aba,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b641d341ad0fb32498d4bc38921c1f5fa0118f8ad6c1996c47f8c5ea8695bdf9,PodSandboxId:789a3f23ede662dd3e0345df78aae8aded7644ac21f06311f92380dfe5aabc8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758067140046719418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-772113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c977716c2a213edf403423c1d994de,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPor
t\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7af430f49a307765efa81a2e58d27dd1884d71b3642027993ea89193e4382485,PodSandboxId:fd20dfacd6d068e1858fe6e2d2723139df82f1de47492d78bcab31877f0dd93e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758067140030190139,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-772113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de14323d77445415967489e9bc2b9259,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a6c33dde35735e1d231ebb1d5f8661451016dd53aa45aaae37879b785d733d9,PodSandboxId:3a61d9222cf2fbd38126710da3e45346525e4d5afbad6e9cc1b4fe0ce22bcd6b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758067140024420659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-772113,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8e3f3db1c9264cdaf504ecceede9f845,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6019ddb1f64b8ffc13fe7d0699a4cfa5b01d4a05000bd7d8a3ba80cee1c56dc5,PodSandboxId:34a4e8f079f1a8a016c90e81289a54d830b4318632cf5f8aa9afe19c7732e429,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758067139974349007,Labels:map[string]string{io.kubernetes.container
.name: etcd,io.kubernetes.pod.name: etcd-addons-772113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bdc197ef501906b0a6dbef768d6c654,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9a2ce86-d6ec-48b4-8031-25821c26cdec name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.688750090Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3eb77c0-3f1d-4c5d-aeb2-b93dbed0877f name=/runtime.v1.RuntimeService/Version
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.688874433Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3eb77c0-3f1d-4c5d-aeb2-b93dbed0877f name=/runtime.v1.RuntimeService/Version
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.690564641Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=03fbddfd-5c48-4d2c-8c87-fc7715dbf196 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.691833689Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758067512691805486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=03fbddfd-5c48-4d2c-8c87-fc7715dbf196 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.693345403Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04012cc4-2430-47f2-a9fd-8c8103eb2a5b name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.693430337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04012cc4-2430-47f2-a9fd-8c8103eb2a5b name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.694013126Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3f9817b7afed2279934c5a935afc5f87ea34faa2ba7b48b93a432caf9911c2d,PodSandboxId:3a677e98fdcfe953fc28286c10f3021c97339f8784686ff50a9b3ae6dcab2d15,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1758067370662915960,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 47ae305d-ca3e-4058-a73e-7fbde8abf594,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf230de62470f581feee9c7d9d13dd58301f7edbddd95bd1926001e9ff099bc,PodSandboxId:00118d4a08fd6c53cba1cefc9db0e96d607dfb8b3d12dd087064f2f43e77d0a2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758067317191641144,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eac57424-ccea-45eb-a612-1e6f0b0fc281,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e951934e3cba2bdd0d8a5eeb48b8fb05e16c351c1ddcdfa4e14b98c90b05b56,PodSandboxId:cec0f13cc80ff023ec505650f662958b767ceed90f847d5a7377cec6d656769b,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758067307171257479,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-ctrt2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 141750b1-cfaf-4d96-9960-ad324ef033cf,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:b933ac39ad371319ee2f11983ef66aa20020367282c5ebe97b0b5c09ce3f8946,PodSandboxId:c402290bae9353f64db55ab7bd552a47bcc1202c8978c93fd34cad4743432c23,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1758067248398063786,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-hk8np,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a38f38a3-4ad2-48ca-86dc-79da71258ad2,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9a8e65cc284050cc15fcdc541d18742738708564313fdbbebc7ff9eafe753e5,PodSandboxId:7b42178063192d41f028f436695f90055e3758b8f54e84e5470c5573404bed22,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758067247283903672,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9znr2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4475e247-ba5e-44e9-95f3-10fc3ddd1aeb,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46493a2b014479e93202fbe43723b0a7a650eeb9412ac71778599699ae7b0520,PodSandboxId:39ed4283b599cefbcd2d244701c5beaa5c9324873cbd2789404aaf7d44e52661,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:966
0a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758067236651394621,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-2zjwp,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 58263e6e-d425-489e-ad7b-499cdfd090f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1843391d60b5f978b498c6ac3eb0560b0325552e15889103a1ab8c2f0c41f3be,PodSandboxId:f2aef03b8e4d6ddfe528d5142a8708a2dc1bad931f76e8b98276f03f7d67907e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758067197522248325,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb561e9-de5c-434a-adbd-c236698880bd,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a17e55359515f4ac3317d1efab37f38d7a9b656a9e6821d64c9415b16be7cf72,PodSandboxId:347320fdd98461135fcb6c25b523425e7381e2c8e3f407896ef8cc0c8783b529,Metadata:&ContainerMetadata{Name:amd-gpu
-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758067181367254003,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7jbw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d933c91f-0ea0-4c08-b2b3-101f533f2b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:591afd015554d5680abab1850869083adef6cdfc8e8224dc3cb0701810821383,PodSandboxId:e51a6efc1f19b520becc092616e2b50df2a82094901cb5cd4235e0c29b96a761,
Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758067160588441992,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5afd1bf-1409-4482-8fc7-ce0c3fbb8435,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b28e4a4ee7058988fd781f9b4ce4676fa6f5b445f662f9cc921e30427f7cce,PodSandboxId:5f01da8ab6d33f6fed3d1e6e103bc5851468867ef8eac30c23e5a6586fb2fb79,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758067152653759472,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fdh2c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ded845-73df-4942-848f-8820953008f6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e0e44c311f791915b374520487cd97c39e16ab2d64f9b8043b8bc5f4be453ab,PodSandboxId:30d210ffbba2133d5b840ea13b2d710479eb51836e77f5b8a836ed668691d5d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758067151905890572,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2kklh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ccfffa7-61ec-4054-939e-7f4697c59aba,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b641d341ad0fb32498d4bc38921c1f5fa0118f8ad6c1996c47f8c5ea8695bdf9,PodSandboxId:789a3f23ede662dd3e0345df78aae8aded7644ac21f06311f92380dfe5aabc8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758067140046719418,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-772113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c977716c2a213edf403423c1d994de,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPor
t\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7af430f49a307765efa81a2e58d27dd1884d71b3642027993ea89193e4382485,PodSandboxId:fd20dfacd6d068e1858fe6e2d2723139df82f1de47492d78bcab31877f0dd93e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758067140030190139,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-772113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de14323d77445415967489e9bc2b9259,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a6c33dde35735e1d231ebb1d5f8661451016dd53aa45aaae37879b785d733d9,PodSandboxId:3a61d9222cf2fbd38126710da3e45346525e4d5afbad6e9cc1b4fe0ce22bcd6b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758067140024420659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-772113,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8e3f3db1c9264cdaf504ecceede9f845,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6019ddb1f64b8ffc13fe7d0699a4cfa5b01d4a05000bd7d8a3ba80cee1c56dc5,PodSandboxId:34a4e8f079f1a8a016c90e81289a54d830b4318632cf5f8aa9afe19c7732e429,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758067139974349007,Labels:map[string]string{io.kubernetes.container
.name: etcd,io.kubernetes.pod.name: etcd-addons-772113,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bdc197ef501906b0a6dbef768d6c654,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04012cc4-2430-47f2-a9fd-8c8103eb2a5b name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.697789746Z" level=debug msg="Reading /var/lib/containers/sigstore/kicbase/echo-server@sha256=a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86/signature-1" file="docker/docker_image_src.go:479"
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.698090655Z" level=debug msg="Not looking for sigstore attachments: disabled by configuration" file="docker/docker_image_src.go:556"
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.698131342Z" level=debug msg="Manifest has MIME type application/vnd.docker.distribution.manifest.v2+json, ordered candidate list [application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v1+json]" file="copy/manifest.go:158"
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.698174618Z" level=debug msg="... will first try using the original manifest unmodified" file="copy/manifest.go:168"
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.698387146Z" level=debug msg="Checking if we can reuse blob sha256:a055a10ed683d0944c17c642f7cf3259b524ceb32317ec887513b018e67aed1e: general substitution = true, compression for MIME type \"application/vnd.docker.image.rootfs.diff.tar.gzip\" = true" file="copy/single.go:681"
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.701259496Z" level=debug msg="Failed to retrieve partial blob: convert_images not configured" file="copy/single.go:756"
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.701563253Z" level=debug msg="Downloading /v2/kicbase/echo-server/blobs/sha256:a055a10ed683d0944c17c642f7cf3259b524ceb32317ec887513b018e67aed1e" file="docker/docker_client.go:1038"
	Sep 17 00:05:12 addons-772113 crio[825]: time="2025-09-17 00:05:12.701681946Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/blobs/sha256:a055a10ed683d0944c17c642f7cf3259b524ceb32317ec887513b018e67aed1e" file="docker/docker_client.go:631"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b3f9817b7afed       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   3a677e98fdcfe       nginx
	cbf230de62470       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   00118d4a08fd6       busybox
	6e951934e3cba       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   cec0f13cc80ff       ingress-nginx-controller-9cc49f96f-ctrt2
	b933ac39ad371       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             4 minutes ago       Exited              patch                     1                   c402290bae935       ingress-nginx-admission-patch-hk8np
	b9a8e65cc2840       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago       Exited              create                    0                   7b42178063192       ingress-nginx-admission-create-9znr2
	46493a2b01447       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            4 minutes ago       Running             gadget                    0                   39ed4283b599c       gadget-2zjwp
	1843391d60b5f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               5 minutes ago       Running             minikube-ingress-dns      0                   f2aef03b8e4d6       kube-ingress-dns-minikube
	a17e55359515f       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   347320fdd9846       amd-gpu-device-plugin-7jbw2
	591afd015554d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   e51a6efc1f19b       storage-provisioner
	02b28e4a4ee70       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             6 minutes ago       Running             coredns                   0                   5f01da8ab6d33       coredns-66bc5c9577-fdh2c
	9e0e44c311f79       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             6 minutes ago       Running             kube-proxy                0                   30d210ffbba21       kube-proxy-2kklh
	b641d341ad0fb       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             6 minutes ago       Running             kube-scheduler            0                   789a3f23ede66       kube-scheduler-addons-772113
	7af430f49a307       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             6 minutes ago       Running             kube-controller-manager   0                   fd20dfacd6d06       kube-controller-manager-addons-772113
	8a6c33dde3573       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             6 minutes ago       Running             kube-apiserver            0                   3a61d9222cf2f       kube-apiserver-addons-772113
	6019ddb1f64b8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             6 minutes ago       Running             etcd                      0                   34a4e8f079f1a       etcd-addons-772113
	
	
	==> coredns [02b28e4a4ee7058988fd781f9b4ce4676fa6f5b445f662f9cc921e30427f7cce] <==
	[INFO] 10.244.0.8:38393 - 5034 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.001451489s
	[INFO] 10.244.0.8:38393 - 35884 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000325349s
	[INFO] 10.244.0.8:38393 - 36426 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000148324s
	[INFO] 10.244.0.8:38393 - 31063 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000360494s
	[INFO] 10.244.0.8:38393 - 57932 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.001626692s
	[INFO] 10.244.0.8:38393 - 26881 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00016618s
	[INFO] 10.244.0.8:38393 - 15068 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.001160996s
	[INFO] 10.244.0.8:49884 - 46523 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000262099s
	[INFO] 10.244.0.8:49884 - 46812 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000228705s
	[INFO] 10.244.0.8:44961 - 32325 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000170005s
	[INFO] 10.244.0.8:44961 - 32597 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000149065s
	[INFO] 10.244.0.8:53935 - 47492 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108026s
	[INFO] 10.244.0.8:53935 - 47709 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000161184s
	[INFO] 10.244.0.8:60720 - 58377 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129263s
	[INFO] 10.244.0.8:60720 - 58853 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00015411s
	[INFO] 10.244.0.23:54933 - 3753 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00053833s
	[INFO] 10.244.0.23:50562 - 56121 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000174955s
	[INFO] 10.244.0.23:57725 - 1363 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149758s
	[INFO] 10.244.0.23:42416 - 33064 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001270281s
	[INFO] 10.244.0.23:47682 - 34749 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120429s
	[INFO] 10.244.0.23:60420 - 37543 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000710414s
	[INFO] 10.244.0.23:48742 - 21070 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004839324s
	[INFO] 10.244.0.23:36228 - 62389 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.006513276s
	[INFO] 10.244.0.27:41159 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000386799s
	[INFO] 10.244.0.27:60634 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.002833045s
	
	
	==> describe nodes <==
	Name:               addons-772113
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-772113
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=addons-772113
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_16T23_59_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-772113
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Sep 2025 23:59:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-772113
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:05:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:03:11 +0000   Tue, 16 Sep 2025 23:59:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:03:11 +0000   Tue, 16 Sep 2025 23:59:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:03:11 +0000   Tue, 16 Sep 2025 23:59:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:03:11 +0000   Tue, 16 Sep 2025 23:59:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.205
	  Hostname:    addons-772113
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef75d235f9224e16bc5c4ff1216a6de0
	  System UUID:                ef75d235-f922-4e16-bc5c-4ff1216a6de0
	  Boot ID:                    58e5e176-0696-4ddc-adc4-c21b65ecb198
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  default                     hello-world-app-5d498dc89-6bnss             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  gadget                      gadget-2zjwp                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-ctrt2    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m53s
	  kube-system                 amd-gpu-device-plugin-7jbw2                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 coredns-66bc5c9577-fdh2c                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m2s
	  kube-system                 etcd-addons-772113                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m8s
	  kube-system                 kube-apiserver-addons-772113                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-addons-772113       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-proxy-2kklh                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-scheduler-addons-772113                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m59s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m14s (x8 over 6m15s)  kubelet          Node addons-772113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s (x8 over 6m15s)  kubelet          Node addons-772113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s (x7 over 6m15s)  kubelet          Node addons-772113 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m8s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m8s                   kubelet          Node addons-772113 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s                   kubelet          Node addons-772113 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s                   kubelet          Node addons-772113 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m7s                   kubelet          Node addons-772113 status is now: NodeReady
	  Normal  RegisteredNode           6m3s                   node-controller  Node addons-772113 event: Registered Node addons-772113 in Controller
	
	
	==> dmesg <==
	[ +12.546861] kauditd_printk_skb: 67 callbacks suppressed
	[  +6.914360] kauditd_printk_skb: 20 callbacks suppressed
	[ +11.415649] kauditd_printk_skb: 38 callbacks suppressed
	[Sep17 00:00] kauditd_printk_skb: 20 callbacks suppressed
	[ +33.419009] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.387382] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.574262] kauditd_printk_skb: 71 callbacks suppressed
	[  +3.763042] kauditd_printk_skb: 121 callbacks suppressed
	[Sep17 00:01] kauditd_printk_skb: 96 callbacks suppressed
	[  +0.000050] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000094] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.350701] kauditd_printk_skb: 68 callbacks suppressed
	[  +3.537725] kauditd_printk_skb: 32 callbacks suppressed
	[Sep17 00:02] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.000170] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.022739] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.116715] kauditd_printk_skb: 122 callbacks suppressed
	[  +0.023745] kauditd_printk_skb: 215 callbacks suppressed
	[  +0.334676] kauditd_printk_skb: 78 callbacks suppressed
	[  +7.173480] kauditd_printk_skb: 25 callbacks suppressed
	[  +7.274845] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.982501] kauditd_printk_skb: 58 callbacks suppressed
	[Sep17 00:03] kauditd_printk_skb: 30 callbacks suppressed
	[  +7.553586] kauditd_printk_skb: 41 callbacks suppressed
	[Sep17 00:05] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [6019ddb1f64b8ffc13fe7d0699a4cfa5b01d4a05000bd7d8a3ba80cee1c56dc5] <==
	{"level":"warn","ts":"2025-09-16T23:59:57.375471Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.937482ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-78565c9fb4-gpp7s.1865e8c5bb3546c9\" limit:1 ","response":"range_response_count:1 size:784"}
	{"level":"info","ts":"2025-09-16T23:59:57.375505Z","caller":"traceutil/trace.go:172","msg":"trace[346640489] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-78565c9fb4-gpp7s.1865e8c5bb3546c9; range_end:; response_count:1; response_revision:979; }","duration":"123.977709ms","start":"2025-09-16T23:59:57.251518Z","end":"2025-09-16T23:59:57.375496Z","steps":["trace[346640489] 'agreement among raft nodes before linearized reading'  (duration: 123.868818ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-16T23:59:57.375650Z","caller":"traceutil/trace.go:172","msg":"trace[1579117879] transaction","detail":"{read_only:false; response_revision:979; number_of_response:1; }","duration":"216.702636ms","start":"2025-09-16T23:59:57.158939Z","end":"2025-09-16T23:59:57.375641Z","steps":["trace[1579117879] 'process raft request'  (duration: 216.239124ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:00:03.092008Z","caller":"traceutil/trace.go:172","msg":"trace[1797693684] linearizableReadLoop","detail":"{readStateIndex:1021; appliedIndex:1021; }","duration":"144.511432ms","start":"2025-09-17T00:00:02.947384Z","end":"2025-09-17T00:00:03.091896Z","steps":["trace[1797693684] 'read index received'  (duration: 144.504771ms)","trace[1797693684] 'applied index is now lower than readState.Index'  (duration: 5.626µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-17T00:00:03.092052Z","caller":"traceutil/trace.go:172","msg":"trace[1347173145] transaction","detail":"{read_only:false; response_revision:994; number_of_response:1; }","duration":"257.816409ms","start":"2025-09-17T00:00:02.834225Z","end":"2025-09-17T00:00:03.092041Z","steps":["trace[1347173145] 'process raft request'  (duration: 257.718675ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:00:03.092341Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.928348ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:00:03.092369Z","caller":"traceutil/trace.go:172","msg":"trace[1458229553] range","detail":"{range_begin:/registry/prioritylevelconfigurations; range_end:; response_count:0; response_revision:994; }","duration":"129.959544ms","start":"2025-09-17T00:00:02.962403Z","end":"2025-09-17T00:00:03.092362Z","steps":["trace[1458229553] 'agreement among raft nodes before linearized reading'  (duration: 129.818996ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:00:03.092561Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.579346ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:00:03.092602Z","caller":"traceutil/trace.go:172","msg":"trace[1909742574] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:994; }","duration":"112.621545ms","start":"2025-09-17T00:00:02.979975Z","end":"2025-09-17T00:00:03.092597Z","steps":["trace[1909742574] 'agreement among raft nodes before linearized reading'  (duration: 112.56197ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:00:03.092116Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"144.715972ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:00:03.092833Z","caller":"traceutil/trace.go:172","msg":"trace[226524321] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:994; }","duration":"145.444116ms","start":"2025-09-17T00:00:02.947379Z","end":"2025-09-17T00:00:03.092823Z","steps":["trace[226524321] 'agreement among raft nodes before linearized reading'  (duration: 144.700318ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:00:35.042136Z","caller":"traceutil/trace.go:172","msg":"trace[795313552] transaction","detail":"{read_only:false; response_revision:1044; number_of_response:1; }","duration":"134.714216ms","start":"2025-09-17T00:00:34.907403Z","end":"2025-09-17T00:00:35.042117Z","steps":["trace[795313552] 'process raft request'  (duration: 118.834818ms)","trace[795313552] 'compare'  (duration: 14.602795ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:00:41.975855Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"322.317015ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:00:41.976858Z","caller":"traceutil/trace.go:172","msg":"trace[696790360] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1057; }","duration":"323.255478ms","start":"2025-09-17T00:00:41.653521Z","end":"2025-09-17T00:00:41.976776Z","steps":["trace[696790360] 'range keys from in-memory index tree'  (duration: 322.248086ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:00:41.977125Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:00:41.653504Z","time spent":"323.387813ms","remote":"127.0.0.1:58518","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-09-17T00:01:32.303805Z","caller":"traceutil/trace.go:172","msg":"trace[1118732127] transaction","detail":"{read_only:false; response_revision:1244; number_of_response:1; }","duration":"174.197617ms","start":"2025-09-17T00:01:32.129579Z","end":"2025-09-17T00:01:32.303777Z","steps":["trace[1118732127] 'process raft request'  (duration: 174.082679ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:02:38.482823Z","caller":"traceutil/trace.go:172","msg":"trace[1471431144] transaction","detail":"{read_only:false; response_revision:1657; number_of_response:1; }","duration":"178.655915ms","start":"2025-09-17T00:02:38.304064Z","end":"2025-09-17T00:02:38.482720Z","steps":["trace[1471431144] 'process raft request'  (duration: 178.021966ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:02:43.973965Z","caller":"traceutil/trace.go:172","msg":"trace[329658997] transaction","detail":"{read_only:false; response_revision:1676; number_of_response:1; }","duration":"363.264889ms","start":"2025-09-17T00:02:43.610684Z","end":"2025-09-17T00:02:43.973949Z","steps":["trace[329658997] 'process raft request'  (duration: 363.115477ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:02:43.975543Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:02:43.610666Z","time spent":"364.589496ms","remote":"127.0.0.1:58464","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1674 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-09-17T00:02:50.526029Z","caller":"traceutil/trace.go:172","msg":"trace[708765803] linearizableReadLoop","detail":"{readStateIndex:1775; appliedIndex:1775; }","duration":"145.070704ms","start":"2025-09-17T00:02:50.380915Z","end":"2025-09-17T00:02:50.525986Z","steps":["trace[708765803] 'read index received'  (duration: 145.065759ms)","trace[708765803] 'applied index is now lower than readState.Index'  (duration: 4.271µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:02:50.531930Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.853963ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-09-17T00:02:50.532024Z","caller":"traceutil/trace.go:172","msg":"trace[344562464] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1705; }","duration":"130.961138ms","start":"2025-09-17T00:02:50.401043Z","end":"2025-09-17T00:02:50.532005Z","steps":["trace[344562464] 'agreement among raft nodes before linearized reading'  (duration: 130.765661ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:02:50.532155Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.075422ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" limit:1 ","response":"range_response_count:1 size:2050"}
	{"level":"info","ts":"2025-09-17T00:02:50.532191Z","caller":"traceutil/trace.go:172","msg":"trace[1102611035] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:1; response_revision:1705; }","duration":"151.272237ms","start":"2025-09-17T00:02:50.380910Z","end":"2025-09-17T00:02:50.532183Z","steps":["trace[1102611035] 'agreement among raft nodes before linearized reading'  (duration: 145.42268ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:03:21.881536Z","caller":"traceutil/trace.go:172","msg":"trace[1863069943] transaction","detail":"{read_only:false; response_revision:1955; number_of_response:1; }","duration":"124.371025ms","start":"2025-09-17T00:03:21.757149Z","end":"2025-09-17T00:03:21.881520Z","steps":["trace[1863069943] 'process raft request'  (duration: 124.230308ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:05:13 up 6 min,  0 users,  load average: 0.46, 1.20, 0.70
	Linux addons-772113 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Sep  9 02:24:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [8a6c33dde35735e1d231ebb1d5f8661451016dd53aa45aaae37879b785d733d9] <==
	E0917 00:02:03.785531       1 conn.go:339] Error on socket receive: read tcp 192.168.50.205:8443->192.168.50.1:48996: use of closed network connection
	I0917 00:02:26.465536       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.91.142"}
	I0917 00:02:32.276389       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0917 00:02:32.496871       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.23.143"}
	E0917 00:02:39.448837       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0917 00:02:54.300980       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0917 00:02:56.543530       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:03:05.977215       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0917 00:03:11.428519       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 00:03:11.429522       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 00:03:11.476914       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 00:03:11.476981       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 00:03:11.494430       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 00:03:11.494480       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 00:03:11.524813       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 00:03:11.524873       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 00:03:11.550426       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 00:03:11.550545       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 00:03:12.316502       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0917 00:03:12.495804       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0917 00:03:12.551052       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0917 00:03:12.580564       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0917 00:04:12.046922       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:04:34.758677       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:05:11.089438       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.118.247"}
	
	
	==> kube-controller-manager [7af430f49a307765efa81a2e58d27dd1884d71b3642027993ea89193e4382485] <==
	E0917 00:03:28.186506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:03:29.202134       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:03:29.203396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:03:32.013668       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:03:32.014894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0917 00:03:40.096165       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0917 00:03:40.096317       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:03:40.241098       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0917 00:03:40.241247       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0917 00:03:48.986415       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:03:48.987524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:03:52.330159       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:03:52.331530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:03:52.439226       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:03:52.440739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:04:18.192015       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:04:18.193583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:04:21.103374       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:04:21.104440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:04:29.419181       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:04:29.420532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:05:01.103356       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:05:01.104876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:05:06.616641       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:05:06.618162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [9e0e44c311f791915b374520487cd97c39e16ab2d64f9b8043b8bc5f4be453ab] <==
	I0916 23:59:12.815593       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0916 23:59:12.922418       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0916 23:59:12.922466       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.205"]
	E0916 23:59:12.922540       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 23:59:13.267622       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0916 23:59:13.267706       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 23:59:13.267731       1 server_linux.go:132] "Using iptables Proxier"
	I0916 23:59:13.294565       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 23:59:13.297003       1 server.go:527] "Version info" version="v1.34.0"
	I0916 23:59:13.297568       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 23:59:13.303885       1 config.go:200] "Starting service config controller"
	I0916 23:59:13.303924       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0916 23:59:13.303965       1 config.go:106] "Starting endpoint slice config controller"
	I0916 23:59:13.303970       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0916 23:59:13.303981       1 config.go:403] "Starting serviceCIDR config controller"
	I0916 23:59:13.303984       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0916 23:59:13.307781       1 config.go:309] "Starting node config controller"
	I0916 23:59:13.307815       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0916 23:59:13.307823       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0916 23:59:13.407014       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0916 23:59:13.408632       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0916 23:59:13.408722       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b641d341ad0fb32498d4bc38921c1f5fa0118f8ad6c1996c47f8c5ea8695bdf9] <==
	E0916 23:59:02.979981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0916 23:59:02.980057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0916 23:59:02.980064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0916 23:59:02.980179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0916 23:59:02.980227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:59:03.821828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0916 23:59:03.876515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0916 23:59:03.889458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0916 23:59:03.954741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0916 23:59:04.050390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0916 23:59:04.057522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0916 23:59:04.074464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0916 23:59:04.156193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0916 23:59:04.192085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0916 23:59:04.240607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0916 23:59:04.320660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0916 23:59:04.320763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0916 23:59:04.339193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0916 23:59:04.348934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0916 23:59:04.368896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0916 23:59:04.395567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0916 23:59:04.411040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0916 23:59:04.412018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0916 23:59:04.435260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I0916 23:59:06.764399       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 17 00:03:26 addons-772113 kubelet[1520]: E0917 00:03:26.351664    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067406350004834  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:03:26 addons-772113 kubelet[1520]: E0917 00:03:26.351716    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067406350004834  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:03:36 addons-772113 kubelet[1520]: E0917 00:03:36.358049    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067416356991797  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:03:36 addons-772113 kubelet[1520]: E0917 00:03:36.358134    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067416356991797  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:03:46 addons-772113 kubelet[1520]: E0917 00:03:46.362961    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067426362548951  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:03:46 addons-772113 kubelet[1520]: E0917 00:03:46.363022    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067426362548951  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:03:56 addons-772113 kubelet[1520]: E0917 00:03:56.365870    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067436365232657  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:03:56 addons-772113 kubelet[1520]: E0917 00:03:56.365902    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067436365232657  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:04:06 addons-772113 kubelet[1520]: E0917 00:04:06.369224    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067446368641414  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:04:06 addons-772113 kubelet[1520]: E0917 00:04:06.369315    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067446368641414  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:04:16 addons-772113 kubelet[1520]: E0917 00:04:16.373384    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067456372459869  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:04:16 addons-772113 kubelet[1520]: E0917 00:04:16.373413    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067456372459869  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:04:26 addons-772113 kubelet[1520]: E0917 00:04:26.377504    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067466376940232  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:04:26 addons-772113 kubelet[1520]: E0917 00:04:26.377565    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067466376940232  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:04:33 addons-772113 kubelet[1520]: I0917 00:04:33.768606    1520 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 00:04:36 addons-772113 kubelet[1520]: E0917 00:04:36.381095    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067476380529834  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:04:36 addons-772113 kubelet[1520]: E0917 00:04:36.381124    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067476380529834  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:04:45 addons-772113 kubelet[1520]: I0917 00:04:45.770630    1520 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-7jbw2" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 00:04:46 addons-772113 kubelet[1520]: E0917 00:04:46.385623    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067486384970414  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:04:46 addons-772113 kubelet[1520]: E0917 00:04:46.385671    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067486384970414  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:04:56 addons-772113 kubelet[1520]: E0917 00:04:56.389067    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067496388345736  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:04:56 addons-772113 kubelet[1520]: E0917 00:04:56.389096    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067496388345736  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:05:06 addons-772113 kubelet[1520]: E0917 00:05:06.392683    1520 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758067506392002407  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:05:06 addons-772113 kubelet[1520]: E0917 00:05:06.392731    1520 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758067506392002407  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 17 00:05:11 addons-772113 kubelet[1520]: I0917 00:05:11.077693    1520 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgr2w\" (UniqueName: \"kubernetes.io/projected/fced18ee-7c26-438c-a638-6cb7e6cbc627-kube-api-access-zgr2w\") pod \"hello-world-app-5d498dc89-6bnss\" (UID: \"fced18ee-7c26-438c-a638-6cb7e6cbc627\") " pod="default/hello-world-app-5d498dc89-6bnss"
	
	
	==> storage-provisioner [591afd015554d5680abab1850869083adef6cdfc8e8224dc3cb0701810821383] <==
	W0917 00:04:48.859550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:50.863642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:50.874000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:52.877672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:52.884666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:54.889612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:54.902713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:56.906992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:56.913427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:58.916812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:04:58.926937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:05:00.930799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:05:00.937146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:05:02.941413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:05:02.950694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:05:04.953632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:05:04.959818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:05:06.963881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:05:06.973517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:05:08.977052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:05:08.984132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:05:11.000694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:05:11.040628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:05:13.053576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:05:13.064574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-772113 -n addons-772113
helpers_test.go:269: (dbg) Run:  kubectl --context addons-772113 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-6bnss ingress-nginx-admission-create-9znr2 ingress-nginx-admission-patch-hk8np
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-772113 describe pod hello-world-app-5d498dc89-6bnss ingress-nginx-admission-create-9znr2 ingress-nginx-admission-patch-hk8np
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-772113 describe pod hello-world-app-5d498dc89-6bnss ingress-nginx-admission-create-9znr2 ingress-nginx-admission-patch-hk8np: exit status 1 (100.250728ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-6bnss
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-772113/192.168.50.205
	Start Time:       Wed, 17 Sep 2025 00:05:11 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Running
	IP:               10.244.0.33
	IPs:
	  IP:           10.244.0.33
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   cri-o://22e37bfd7eb1f45c972a1d6e3b3258f82c0e8611adff22c82072835e7dfb532a
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Running
	      Started:      Wed, 17 Sep 2025 00:05:13 +0000
	    Ready:          True
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zgr2w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       True 
	  ContainersReady             True 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zgr2w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-6bnss to addons-772113
	  Normal  Pulling    3s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.454s (1.454s including waiting). Image size: 4944818 bytes.
	  Normal  Created    1s    kubelet            Created container: hello-world-app
	  Normal  Started    1s    kubelet            Started container hello-world-app

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9znr2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-hk8np" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-772113 describe pod hello-world-app-5d498dc89-6bnss ingress-nginx-admission-create-9znr2 ingress-nginx-admission-patch-hk8np: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-772113 addons disable ingress-dns --alsologtostderr -v=1: (1.330698635s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-772113 addons disable ingress --alsologtostderr -v=1: (7.951881047s)
--- FAIL: TestAddons/parallel/Ingress (171.61s)
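
The storage-provisioner log above repeatedly warns "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice". As a minimal, hypothetical client-go sketch of the suggested replacement (not part of this test run; the kubeconfig path and the "kube-system" namespace are illustrative assumptions), reading EndpointSlices instead of the deprecated Endpoints looks like this:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig location (assumed path).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// discovery.k8s.io/v1 EndpointSlices carry the same backend information as
	// core/v1 Endpoints, split into slices, and are the suggested replacement.
	slices, err := clientset.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Println(s.Name, len(s.Endpoints))
	}
}

EndpointSlices expose the same endpoint data keyed by the owning Service, which is why the warning points to them as the successor API.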

x
+
TestFunctional/parallel/DashboardCmd (302.39s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-456067 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-456067 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-456067 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-456067 --alsologtostderr -v=1] stderr:
I0917 00:11:14.005289  153649 out.go:360] Setting OutFile to fd 1 ...
I0917 00:11:14.005585  153649 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:11:14.005594  153649 out.go:374] Setting ErrFile to fd 2...
I0917 00:11:14.005598  153649 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:11:14.005779  153649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
I0917 00:11:14.006094  153649 mustload.go:65] Loading cluster: functional-456067
I0917 00:11:14.006433  153649 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:11:14.006822  153649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 00:11:14.006909  153649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 00:11:14.023558  153649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44413
I0917 00:11:14.024087  153649 main.go:141] libmachine: () Calling .GetVersion
I0917 00:11:14.024681  153649 main.go:141] libmachine: Using API Version  1
I0917 00:11:14.024714  153649 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 00:11:14.025079  153649 main.go:141] libmachine: () Calling .GetMachineName
I0917 00:11:14.025313  153649 main.go:141] libmachine: (functional-456067) Calling .GetState
I0917 00:11:14.027002  153649 host.go:66] Checking if "functional-456067" exists ...
I0917 00:11:14.027302  153649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 00:11:14.027339  153649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 00:11:14.041406  153649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34977
I0917 00:11:14.041954  153649 main.go:141] libmachine: () Calling .GetVersion
I0917 00:11:14.042434  153649 main.go:141] libmachine: Using API Version  1
I0917 00:11:14.042456  153649 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 00:11:14.042811  153649 main.go:141] libmachine: () Calling .GetMachineName
I0917 00:11:14.043111  153649 main.go:141] libmachine: (functional-456067) Calling .DriverName
I0917 00:11:14.043286  153649 api_server.go:166] Checking apiserver status ...
I0917 00:11:14.043363  153649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0917 00:11:14.043394  153649 main.go:141] libmachine: (functional-456067) Calling .GetSSHHostname
I0917 00:11:14.046568  153649 main.go:141] libmachine: (functional-456067) DBG | domain functional-456067 has defined MAC address 52:54:00:03:de:c7 in network mk-functional-456067
I0917 00:11:14.047039  153649 main.go:141] libmachine: (functional-456067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:de:c7", ip: ""} in network mk-functional-456067: {Iface:virbr2 ExpiryTime:2025-09-17 01:08:00 +0000 UTC Type:0 Mac:52:54:00:03:de:c7 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:functional-456067 Clientid:01:52:54:00:03:de:c7}
I0917 00:11:14.047074  153649 main.go:141] libmachine: (functional-456067) DBG | domain functional-456067 has defined IP address 192.168.50.44 and MAC address 52:54:00:03:de:c7 in network mk-functional-456067
I0917 00:11:14.047263  153649 main.go:141] libmachine: (functional-456067) Calling .GetSSHPort
I0917 00:11:14.047426  153649 main.go:141] libmachine: (functional-456067) Calling .GetSSHKeyPath
I0917 00:11:14.047702  153649 main.go:141] libmachine: (functional-456067) Calling .GetSSHUsername
I0917 00:11:14.047843  153649 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/functional-456067/id_rsa Username:docker}
I0917 00:11:14.137980  153649 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6700/cgroup
W0917 00:11:14.151081  153649 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6700/cgroup: Process exited with status 1
stdout:

stderr:
I0917 00:11:14.151179  153649 ssh_runner.go:195] Run: ls
I0917 00:11:14.156571  153649 api_server.go:253] Checking apiserver healthz at https://192.168.50.44:8441/healthz ...
I0917 00:11:14.161614  153649 api_server.go:279] https://192.168.50.44:8441/healthz returned 200:
ok
W0917 00:11:14.161676  153649 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0917 00:11:14.161842  153649 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:11:14.161884  153649 addons.go:69] Setting dashboard=true in profile "functional-456067"
I0917 00:11:14.161904  153649 addons.go:238] Setting addon dashboard=true in "functional-456067"
I0917 00:11:14.161940  153649 host.go:66] Checking if "functional-456067" exists ...
I0917 00:11:14.162209  153649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 00:11:14.162256  153649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 00:11:14.176353  153649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44311
I0917 00:11:14.176918  153649 main.go:141] libmachine: () Calling .GetVersion
I0917 00:11:14.177520  153649 main.go:141] libmachine: Using API Version  1
I0917 00:11:14.177548  153649 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 00:11:14.177946  153649 main.go:141] libmachine: () Calling .GetMachineName
I0917 00:11:14.178600  153649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 00:11:14.178651  153649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 00:11:14.193370  153649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38397
I0917 00:11:14.193830  153649 main.go:141] libmachine: () Calling .GetVersion
I0917 00:11:14.194274  153649 main.go:141] libmachine: Using API Version  1
I0917 00:11:14.194292  153649 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 00:11:14.194668  153649 main.go:141] libmachine: () Calling .GetMachineName
I0917 00:11:14.194880  153649 main.go:141] libmachine: (functional-456067) Calling .GetState
I0917 00:11:14.197106  153649 main.go:141] libmachine: (functional-456067) Calling .DriverName
I0917 00:11:14.202573  153649 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0917 00:11:14.205945  153649 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0917 00:11:14.210835  153649 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0917 00:11:14.210873  153649 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0917 00:11:14.210906  153649 main.go:141] libmachine: (functional-456067) Calling .GetSSHHostname
I0917 00:11:14.216567  153649 main.go:141] libmachine: (functional-456067) DBG | domain functional-456067 has defined MAC address 52:54:00:03:de:c7 in network mk-functional-456067
I0917 00:11:14.217114  153649 main.go:141] libmachine: (functional-456067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:de:c7", ip: ""} in network mk-functional-456067: {Iface:virbr2 ExpiryTime:2025-09-17 01:08:00 +0000 UTC Type:0 Mac:52:54:00:03:de:c7 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:functional-456067 Clientid:01:52:54:00:03:de:c7}
I0917 00:11:14.217147  153649 main.go:141] libmachine: (functional-456067) DBG | domain functional-456067 has defined IP address 192.168.50.44 and MAC address 52:54:00:03:de:c7 in network mk-functional-456067
I0917 00:11:14.217411  153649 main.go:141] libmachine: (functional-456067) Calling .GetSSHPort
I0917 00:11:14.217661  153649 main.go:141] libmachine: (functional-456067) Calling .GetSSHKeyPath
I0917 00:11:14.217836  153649 main.go:141] libmachine: (functional-456067) Calling .GetSSHUsername
I0917 00:11:14.218004  153649 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/functional-456067/id_rsa Username:docker}
I0917 00:11:14.320198  153649 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0917 00:11:14.320231  153649 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0917 00:11:14.344040  153649 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0917 00:11:14.344070  153649 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0917 00:11:14.369520  153649 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0917 00:11:14.369548  153649 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0917 00:11:14.393070  153649 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0917 00:11:14.393103  153649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0917 00:11:14.424108  153649 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0917 00:11:14.424138  153649 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0917 00:11:14.451573  153649 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0917 00:11:14.451599  153649 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0917 00:11:14.480362  153649 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0917 00:11:14.480387  153649 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0917 00:11:14.512154  153649 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0917 00:11:14.512187  153649 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0917 00:11:14.535502  153649 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0917 00:11:14.535537  153649 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0917 00:11:14.558797  153649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0917 00:11:15.438148  153649 main.go:141] libmachine: Making call to close driver server
I0917 00:11:15.438177  153649 main.go:141] libmachine: (functional-456067) Calling .Close
I0917 00:11:15.438492  153649 main.go:141] libmachine: Successfully made call to close driver server
I0917 00:11:15.438516  153649 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 00:11:15.438517  153649 main.go:141] libmachine: (functional-456067) DBG | Closing plugin on server side
I0917 00:11:15.438531  153649 main.go:141] libmachine: Making call to close driver server
I0917 00:11:15.438619  153649 main.go:141] libmachine: (functional-456067) Calling .Close
I0917 00:11:15.438875  153649 main.go:141] libmachine: Successfully made call to close driver server
I0917 00:11:15.438891  153649 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 00:11:15.440511  153649 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-456067 addons enable metrics-server

I0917 00:11:15.441978  153649 addons.go:201] Writing out "functional-456067" config to set dashboard=true...
W0917 00:11:15.442295  153649 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0917 00:11:15.443302  153649 kapi.go:59] client config for functional-456067: &rest.Config{Host:"https://192.168.50.44:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.key", CAFile:"/home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0917 00:11:15.444025  153649 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0917 00:11:15.444069  153649 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0917 00:11:15.444087  153649 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0917 00:11:15.444097  153649 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0917 00:11:15.444107  153649 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0917 00:11:15.457810  153649 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  59b20f3a-4b1b-4339-b669-913be553a3bb 939 0 2025-09-17 00:11:15 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-17 00:11:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.102.8.104,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.102.8.104],IPFamilies:[IPv4],AllocateLoadBalancerN
odePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0917 00:11:15.457991  153649 out.go:285] * Launching proxy ...
* Launching proxy ...
I0917 00:11:15.458073  153649 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-456067 proxy --port 36195]
I0917 00:11:15.458381  153649 dashboard.go:157] Waiting for kubectl to output host:port ...
I0917 00:11:15.514294  153649 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0917 00:11:15.514357  153649 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0917 00:11:15.527117  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b5130e04-1e16-44de-aa08-8b9810fee2ee] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc001461700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005a83c0 TLS:<nil>}
I0917 00:11:15.527208  153649 retry.go:31] will retry after 95.71µs: Temporary Error: unexpected response code: 503
I0917 00:11:15.541536  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c79b4c15-93ae-408b-a149-14571806e2a7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc00161c480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003172c0 TLS:<nil>}
I0917 00:11:15.541603  153649 retry.go:31] will retry after 157.716µs: Temporary Error: unexpected response code: 503
I0917 00:11:15.546430  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[83bc399a-1495-4194-84c3-be221d5d0bac] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc001461800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005a8500 TLS:<nil>}
I0917 00:11:15.546526  153649 retry.go:31] will retry after 144.273µs: Temporary Error: unexpected response code: 503
I0917 00:11:15.552104  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ef723741-b534-417c-91b2-abd0236689ab] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc00161c580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317400 TLS:<nil>}
I0917 00:11:15.552181  153649 retry.go:31] will retry after 430.949µs: Temporary Error: unexpected response code: 503
I0917 00:11:15.557639  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a27d39ba-3235-41f4-94bb-dc367f22600c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc001680200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005a88c0 TLS:<nil>}
I0917 00:11:15.557717  153649 retry.go:31] will retry after 680.174µs: Temporary Error: unexpected response code: 503
I0917 00:11:15.567194  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2686f444-7298-4383-8b3f-b473740ac0aa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc0014618c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016e2000 TLS:<nil>}
I0917 00:11:15.567265  153649 retry.go:31] will retry after 1.08186ms: Temporary Error: unexpected response code: 503
I0917 00:11:15.572600  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ea23d22f-5353-42b3-a34d-1a997cc6df72] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc001680300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317540 TLS:<nil>}
I0917 00:11:15.572657  153649 retry.go:31] will retry after 571.82µs: Temporary Error: unexpected response code: 503
I0917 00:11:15.579175  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[46cccfd3-92ae-4c1f-aa04-7cc3c56affc6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc0016803c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016e2140 TLS:<nil>}
I0917 00:11:15.579245  153649 retry.go:31] will retry after 1.590426ms: Temporary Error: unexpected response code: 503
I0917 00:11:15.584959  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c6626c54-49b5-4639-b16c-fce5e7f591b3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc00161c6c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016e2280 TLS:<nil>}
I0917 00:11:15.585034  153649 retry.go:31] will retry after 2.497639ms: Temporary Error: unexpected response code: 503
I0917 00:11:15.593837  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cfbd1ae3-5315-4797-93cd-1d6b737763f4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc0016804c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005a8a00 TLS:<nil>}
I0917 00:11:15.593918  153649 retry.go:31] will retry after 5.460295ms: Temporary Error: unexpected response code: 503
I0917 00:11:15.603245  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c4ee6984-d0de-432a-b0ce-501013266d2a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc00161c7c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016e23c0 TLS:<nil>}
I0917 00:11:15.603323  153649 retry.go:31] will retry after 7.461676ms: Temporary Error: unexpected response code: 503
I0917 00:11:15.620145  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[18eabbb8-4fd7-4275-a30b-ad2b05c67d57] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc0014619c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005a8b40 TLS:<nil>}
I0917 00:11:15.620235  153649 retry.go:31] will retry after 5.535982ms: Temporary Error: unexpected response code: 503
I0917 00:11:15.629869  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[673775ba-90f8-4e40-b1e3-6e607732fddd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc00161c8c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317680 TLS:<nil>}
I0917 00:11:15.629961  153649 retry.go:31] will retry after 13.786508ms: Temporary Error: unexpected response code: 503
I0917 00:11:15.650662  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b2647e59-eabf-44bd-9833-4dbfa55e5a28] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc001680600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005a8c80 TLS:<nil>}
I0917 00:11:15.650737  153649 retry.go:31] will retry after 17.152561ms: Temporary Error: unexpected response code: 503
I0917 00:11:15.672292  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2eb71a39-788c-48bd-bb05-afad457471d7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc0016806c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016e2500 TLS:<nil>}
I0917 00:11:15.672380  153649 retry.go:31] will retry after 21.0931ms: Temporary Error: unexpected response code: 503
I0917 00:11:15.699158  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7f95f359-e897-4c5a-b492-349729edfcf8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc001461a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016e2640 TLS:<nil>}
I0917 00:11:15.699263  153649 retry.go:31] will retry after 42.684413ms: Temporary Error: unexpected response code: 503
I0917 00:11:15.746994  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4fa87b19-b2e9-4489-bcb0-d89dc278445a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc0016807c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003177c0 TLS:<nil>}
I0917 00:11:15.747059  153649 retry.go:31] will retry after 90.111622ms: Temporary Error: unexpected response code: 503
I0917 00:11:15.840591  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c3300bf5-d10c-4dc1-abc5-7cb0f9335d02] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc00161ca00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016e2780 TLS:<nil>}
I0917 00:11:15.840693  153649 retry.go:31] will retry after 49.815733ms: Temporary Error: unexpected response code: 503
I0917 00:11:15.915523  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e3a6fabe-79a1-4f33-96c8-3ff0b9d58898] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:15 GMT]] Body:0xc001680900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005a9180 TLS:<nil>}
I0917 00:11:15.915600  153649 retry.go:31] will retry after 195.171451ms: Temporary Error: unexpected response code: 503
I0917 00:11:16.115603  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[51ecec59-728d-4d76-9cfb-d10565178496] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:16 GMT]] Body:0xc00161cac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016e28c0 TLS:<nil>}
I0917 00:11:16.115692  153649 retry.go:31] will retry after 126.37717ms: Temporary Error: unexpected response code: 503
I0917 00:11:16.246161  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0f82acea-3258-4de2-8f28-09951dc87a0c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:16 GMT]] Body:0xc001680a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005a97c0 TLS:<nil>}
I0917 00:11:16.246231  153649 retry.go:31] will retry after 363.299057ms: Temporary Error: unexpected response code: 503
I0917 00:11:16.613330  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bf9acfb5-a8e4-40ea-ab29-72aee936210d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:16 GMT]] Body:0xc001461bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016e2a00 TLS:<nil>}
I0917 00:11:16.613397  153649 retry.go:31] will retry after 268.88793ms: Temporary Error: unexpected response code: 503
I0917 00:11:16.886511  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3aec9671-1b5f-42eb-bfad-dfa64b0f4455] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:16 GMT]] Body:0xc00161cb80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317900 TLS:<nil>}
I0917 00:11:16.886601  153649 retry.go:31] will retry after 710.598092ms: Temporary Error: unexpected response code: 503
I0917 00:11:17.601376  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3c4d973b-ce40-451c-961b-98926bd55fa4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:17 GMT]] Body:0xc001461cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005a9900 TLS:<nil>}
I0917 00:11:17.601447  153649 retry.go:31] will retry after 789.540203ms: Temporary Error: unexpected response code: 503
I0917 00:11:18.395217  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[80b5d9a7-afa6-4efa-929e-f2a9166183c5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:18 GMT]] Body:0xc001680b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317a40 TLS:<nil>}
I0917 00:11:18.395300  153649 retry.go:31] will retry after 1.866202156s: Temporary Error: unexpected response code: 503
I0917 00:11:20.266240  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6f0ff78a-7492-46a1-b211-df10992e79c1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:20 GMT]] Body:0xc00161ccc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317b80 TLS:<nil>}
I0917 00:11:20.266310  153649 retry.go:31] will retry after 3.191382093s: Temporary Error: unexpected response code: 503
I0917 00:11:23.463086  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[61716146-9a27-437f-8608-08f0f72d377d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:23 GMT]] Body:0xc001680b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005a9a40 TLS:<nil>}
I0917 00:11:23.463150  153649 retry.go:31] will retry after 1.950955565s: Temporary Error: unexpected response code: 503
I0917 00:11:25.418872  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[96f30e27-213c-4ad9-8fd5-aab65db04f04] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:25 GMT]] Body:0xc00161cdc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016e2b40 TLS:<nil>}
I0917 00:11:25.418959  153649 retry.go:31] will retry after 4.132291619s: Temporary Error: unexpected response code: 503
I0917 00:11:29.555119  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[00fc81f7-09c4-497a-be4c-20e7674616b3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:29 GMT]] Body:0xc001461e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016e2c80 TLS:<nil>}
I0917 00:11:29.555200  153649 retry.go:31] will retry after 12.605803938s: Temporary Error: unexpected response code: 503
I0917 00:11:42.167479  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f1531c65-9636-4592-b4c9-a78a31c0154f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:42 GMT]] Body:0xc00161ce40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000317cc0 TLS:<nil>}
I0917 00:11:42.167579  153649 retry.go:31] will retry after 12.615407761s: Temporary Error: unexpected response code: 503
I0917 00:11:54.787076  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6d82445d-4ec1-4747-80f0-379a641246ba] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:11:54 GMT]] Body:0xc001461f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005a9b80 TLS:<nil>}
I0917 00:11:54.787142  153649 retry.go:31] will retry after 13.281559004s: Temporary Error: unexpected response code: 503
I0917 00:12:08.074965  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6f2cbc1b-611e-4986-b943-8c4555fca5f7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:12:08 GMT]] Body:0xc0015d0000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005a9cc0 TLS:<nil>}
I0917 00:12:08.075052  153649 retry.go:31] will retry after 27.133866305s: Temporary Error: unexpected response code: 503
I0917 00:12:35.213009  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e7edd7c0-022c-422b-a05e-4009a2debceb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:12:35 GMT]] Body:0xc0015d0080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005a9e00 TLS:<nil>}
I0917 00:12:35.213092  153649 retry.go:31] will retry after 32.100623813s: Temporary Error: unexpected response code: 503
I0917 00:13:07.320111  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a584396c-0f8c-4065-aeee-aed6cdaa736b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:13:07 GMT]] Body:0xc0015d0100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016e2dc0 TLS:<nil>}
I0917 00:13:07.320220  153649 retry.go:31] will retry after 1m5.194982966s: Temporary Error: unexpected response code: 503
I0917 00:14:12.522486  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6f36124f-1df6-480d-9558-61eee553d1d8] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:14:12 GMT]] Body:0xc001680080 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016e2f00 TLS:<nil>}
I0917 00:14:12.522583  153649 retry.go:31] will retry after 1m11.452705564s: Temporary Error: unexpected response code: 503
I0917 00:15:23.985246  153649 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e4910735-6fd7-4466-8443-6a7c9b88f4f2] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Wed, 17 Sep 2025 00:15:23 GMT]] Body:0xc0008060c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016e3040 TLS:<nil>}
I0917 00:15:23.985375  153649 retry.go:31] will retry after 51.855997258s: Temporary Error: unexpected response code: 503
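The retry.go lines above show the pattern the dashboard test relies on: poll the kubectl-proxy URL for the kubernetes-dashboard service, treat HTTP 503 as a temporary error, and retry with a growing, jittered delay until a deadline expires. The sketch below illustrates that pattern only; it is not minikube's actual retry implementation, and the function name, port 36195, and the 5-minute deadline are illustrative values taken from this log run.

// Minimal sketch (assumed, not minikube's code) of retry-with-backoff
// against the dashboard proxy URL, treating 503 as a temporary error.
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

func pollDashboard(url string, deadline time.Duration) error {
	wait := 5 * time.Millisecond
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // dashboard is serving
			}
			fmt.Printf("Temporary Error: unexpected response code: %d\n", resp.StatusCode)
		} else {
			fmt.Printf("Temporary Error: %v\n", err)
		}
		// Grow the delay and add jitter, roughly matching the intervals
		// seen above (7ms, 13ms, ... 1m5s); cap growth at one minute.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		if wait < time.Minute {
			wait *= 2
		}
	}
	return fmt.Errorf("dashboard did not become ready within %v", deadline)
}

func main() {
	// 36195 is the local kubectl-proxy port from this particular run.
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := pollDashboard(url, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In this run every poll returned 503 until the test's timeout, which is why the harness proceeds to the post-mortem collection below.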
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-456067 -n functional-456067
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-456067 logs -n 25: (1.556875283s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                ARGS                                                                 │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-456067 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ mount          │ -p functional-456067 /tmp/TestFunctionalparallelMountCmdspecific-port3572376631/001:/mount-9p --alsologtostderr -v=1 --port 46464   │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ ssh            │ functional-456067 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh            │ functional-456067 ssh -- ls -la /mount-9p                                                                                           │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh            │ functional-456067 ssh sudo umount -f /mount-9p                                                                                      │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ mount          │ -p functional-456067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3539378027/001:/mount1 --alsologtostderr -v=1                  │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ ssh            │ functional-456067 ssh findmnt -T /mount1                                                                                            │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ mount          │ -p functional-456067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3539378027/001:/mount3 --alsologtostderr -v=1                  │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ mount          │ -p functional-456067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3539378027/001:/mount2 --alsologtostderr -v=1                  │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ ssh            │ functional-456067 ssh findmnt -T /mount1                                                                                            │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh            │ functional-456067 ssh findmnt -T /mount2                                                                                            │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh            │ functional-456067 ssh findmnt -T /mount3                                                                                            │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ mount          │ -p functional-456067 --kill=true                                                                                                    │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ start          │ -p functional-456067 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ start          │ -p functional-456067 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false           │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ update-context │ functional-456067 update-context --alsologtostderr -v=2                                                                             │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ update-context │ functional-456067 update-context --alsologtostderr -v=2                                                                             │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ update-context │ functional-456067 update-context --alsologtostderr -v=2                                                                             │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ image          │ functional-456067 image ls --format short --alsologtostderr                                                                         │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ image          │ functional-456067 image ls --format yaml --alsologtostderr                                                                          │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh            │ functional-456067 ssh pgrep buildkitd                                                                                               │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ image          │ functional-456067 image build -t localhost/my-image:functional-456067 testdata/build --alsologtostderr                              │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ image          │ functional-456067 image ls                                                                                                          │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ image          │ functional-456067 image ls --format json --alsologtostderr                                                                          │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ image          │ functional-456067 image ls --format table --alsologtostderr                                                                         │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:12:14
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:12:14.113902  154467 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:12:14.114156  154467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:14.114165  154467 out.go:374] Setting ErrFile to fd 2...
	I0917 00:12:14.114169  154467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:14.114374  154467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
	I0917 00:12:14.114806  154467 out.go:368] Setting JSON to false
	I0917 00:12:14.115682  154467 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10478,"bootTime":1758057456,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:12:14.115776  154467 start.go:140] virtualization: kvm guest
	I0917 00:12:14.118770  154467 out.go:179] * [functional-456067] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:12:14.120711  154467 notify.go:220] Checking for updates...
	I0917 00:12:14.120772  154467 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:12:14.122569  154467 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:12:14.124357  154467 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	I0917 00:12:14.125594  154467 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	I0917 00:12:14.130418  154467 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:12:14.131745  154467 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:12:14.133275  154467 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:12:14.133700  154467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:12:14.133798  154467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:12:14.147372  154467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0917 00:12:14.147932  154467 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:12:14.148482  154467 main.go:141] libmachine: Using API Version  1
	I0917 00:12:14.148513  154467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:12:14.149023  154467 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:12:14.149247  154467 main.go:141] libmachine: (functional-456067) Calling .DriverName
	I0917 00:12:14.149533  154467 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:12:14.149963  154467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:12:14.150012  154467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:12:14.164179  154467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39459
	I0917 00:12:14.164728  154467 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:12:14.165310  154467 main.go:141] libmachine: Using API Version  1
	I0917 00:12:14.165331  154467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:12:14.165660  154467 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:12:14.165868  154467 main.go:141] libmachine: (functional-456067) Calling .DriverName
	I0917 00:12:14.196787  154467 out.go:179] * Using the kvm2 driver based on existing profile
	I0917 00:12:14.198209  154467 start.go:304] selected driver: kvm2
	I0917 00:12:14.198255  154467 start.go:918] validating driver "kvm2" against &{Name:functional-456067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-456067 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:12:14.198407  154467 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:12:14.199402  154467 cni.go:84] Creating CNI manager for ""
	I0917 00:12:14.199480  154467 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 00:12:14.199546  154467 start.go:348] cluster config:
	{Name:functional-456067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-456067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:12:14.200951  154467 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.827241955Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758068174827217130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:222508,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98de2d52-4e9a-4e16-8b3b-11b3fb755b63 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.827880571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=964e8765-4815-4320-aa3a-8e8977eb14e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.827954064Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=964e8765-4815-4320-aa3a-8e8977eb14e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.828319828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:929f0819cfce05f107cc586915d914ba997c586fcdb094e4bba5c2ea7660752a,PodSandboxId:759cbccd052f2166835e9b1a257ad42cafa9c1605a331a89d3d82b04a5bae582,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758067927541338404,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6736e2e9-c999-4357-b65c-6e99190f152c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3fd165c189428df5aab57509f99623291749b001ed12741f449f2b7882a87c,PodSandboxId:ff755b577a7cee0fc8362c63706f53bb1783133965ff2b4e3be9929bdf14b48b,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1758067864926016845,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-b8f7n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75846762-7135-48fa-b2aa-8d1927545a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72295602e69fc60e984f6c9a6d585562117c81fcab649a8342e4fd679735d3,PodSandboxId:5eb45331e9d76f8fa626d2aba2c0e16d298041a32f582a7e0cfb14b7ba23559d,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1758067863341926715,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-fk8qm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d5aab6dc-8703-4af7-bd0b-093f75de9f53,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"na
me\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9f839ba278afe3c20abe1de686c7602b2a3be69877fbb9aa07bfe28a5c2d79,PodSandboxId:70a0d4f863edff493fa02649a0af2fe9b34eebed31eafbb924e56824cb934bcd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758067826645872220,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcf69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cebfaed2-8cab-4dd0-8ca6-089cfabdc70e,},Annotations:map[string]string{io.kubernet
es.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042dc70379de4d844d8f12b9393bba3b57b1e97c822197d216f26e93222192e7,PodSandboxId:d584c17a5066fec8aa5659b1cc0984e5255a92d28a3de956712588061283eee1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758067826579622903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893852c-182d-4f0b-adc7-6cf85183f756,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e634c3104080a0d045995534deb7ad9dd5b458da256e483fc7c789169af35b,PodSandboxId:d863933b38a2a02050c0571b956e843d5be5c9fed3aee41b1f6949670442f46b,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758067826604845276,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9wt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698b133d-2da9-43ed-b8ce-879f34603003,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8f3af172c0bd637e14426a9b5fdc5cbbc23deceab06225df5426aeb77e9a8f2,PodSandboxId:458f5d6c912fc37c345f30a78f1b973cbcee768e1b05c1ee58abd815bd241199,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Stat
e:CONTAINER_RUNNING,CreatedAt:1758067822986277578,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e4a7d9ac9b8e3e4d16e351fdab7f9d,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a20360f0aee1a4da4cc6a16e88ac133d2bde5bba76901442ce606920448fe8,PodSandboxId:7f86b9e327927f4d00ca5146249610f4944f164385c58455b205c46b8c67f48c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758067822789905827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 774929cd9e929910c78f51089f6ce784,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c82c77796405ca65fc28e9767c340158a833571d00f641613fd2d58bdd14c622,PodSandboxId:b56e2b3d2c075a31eb4bd7720121312b013752f62d57377ed86410cc9629b545,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758067822818600550,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c865c4ef5aadbdf17a9a89be8d577f,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68da90bb72167b6caafe3982615ce7f907bfb9f03790445580f0c541594277f9,PodSandboxId:d7eeac
2c9a0849cc4d6dd300bc8551c5a0219a2fe20cfce9fe44bad85e813e18,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758067822726239204,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08a5f4e563d186ce013dd3e014ba54,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:009cc8
f40ca65b9b9d1844c8cec12e3aa639d4fa6159eb2b8c08cdaf9e366923,PodSandboxId:f503e182fbce8ab3c9eb489d5c616758ff269cdbb35bfc55e4ff40183a286937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758067785712251954,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcf69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cebfaed2-8cab-4dd0-8ca6-089cfabdc70e,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a85c0699434bfc72e545b837eddb3394f35737b
3a9f90481867a606e119008,PodSandboxId:1e23269daf1c84e92ef48de5aa4f28fb4912cd5a1aa31dd416695ada1f606ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758067785735153587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893852c-182d-4f0b-adc7-6cf85183f756,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac1aca53e9bdc7d9a3710b2ccda65b1e62a52c7cbd842fc18aeeb
26b25c09ec,PodSandboxId:1bfab32a2850088645750281210cfc6b4b54dc75234593f672b57682ea2d33cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758067785707912229,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9wt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698b133d-2da9-43ed-b8ce-879f34603003,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"rea
diness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96fd4e7775ff38539b28775afa2831a2e91c348e198c6eff2c89008e7335c0b,PodSandboxId:4a8d05ebc9a83fe10215287312c3c3b0b90ea010413ffe9a8e88ac3e16f6cdcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758067782013953312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c865c4ef5aadbdf17a9a89be8d577f,},Annotations:map[string
]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c0cc6452fe55a4fc2cb24900ba6a1bc794c734b614e67a42c3f8228f2947ca3,PodSandboxId:c86c3dbf05534f7f2de19b239bb40870b1d98e0029133a337706a3c510caff82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758067781971731001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-fun
ctional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 774929cd9e929910c78f51089f6ce784,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa50634c06db9a61f344a7426395967b47f5fe13c1ca9ac1b2b48952c05614b5,PodSandboxId:379010ae968c7e55fc6fb81cec62140f4de438ac3704d3384b918ec846207ee0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758067781910668039,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08a5f4e563d186ce013dd3e014ba54,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=964e8765-4815-4320-aa3a-8e8977eb14e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.875895912Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89e1a294-bf42-4489-9819-ec9e765d230f name=/runtime.v1.RuntimeService/Version
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.875983472Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89e1a294-bf42-4489-9819-ec9e765d230f name=/runtime.v1.RuntimeService/Version
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.877251193Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d82f39ac-0099-4929-abf7-df9f3dc93f9c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.877969178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758068174877946867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:222508,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d82f39ac-0099-4929-abf7-df9f3dc93f9c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.878592892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e4d61b2-cb84-4b14-aae5-f1056c530c75 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.878668191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e4d61b2-cb84-4b14-aae5-f1056c530c75 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.878990360Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:929f0819cfce05f107cc586915d914ba997c586fcdb094e4bba5c2ea7660752a,PodSandboxId:759cbccd052f2166835e9b1a257ad42cafa9c1605a331a89d3d82b04a5bae582,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758067927541338404,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6736e2e9-c999-4357-b65c-6e99190f152c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3fd165c189428df5aab57509f99623291749b001ed12741f449f2b7882a87c,PodSandboxId:ff755b577a7cee0fc8362c63706f53bb1783133965ff2b4e3be9929bdf14b48b,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1758067864926016845,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-b8f7n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75846762-7135-48fa-b2aa-8d1927545a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72295602e69fc60e984f6c9a6d585562117c81fcab649a8342e4fd679735d3,PodSandboxId:5eb45331e9d76f8fa626d2aba2c0e16d298041a32f582a7e0cfb14b7ba23559d,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1758067863341926715,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-fk8qm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d5aab6dc-8703-4af7-bd0b-093f75de9f53,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"na
me\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9f839ba278afe3c20abe1de686c7602b2a3be69877fbb9aa07bfe28a5c2d79,PodSandboxId:70a0d4f863edff493fa02649a0af2fe9b34eebed31eafbb924e56824cb934bcd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758067826645872220,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcf69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cebfaed2-8cab-4dd0-8ca6-089cfabdc70e,},Annotations:map[string]string{io.kubernet
es.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042dc70379de4d844d8f12b9393bba3b57b1e97c822197d216f26e93222192e7,PodSandboxId:d584c17a5066fec8aa5659b1cc0984e5255a92d28a3de956712588061283eee1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758067826579622903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893852c-182d-4f0b-adc7-6cf85183f756,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e634c3104080a0d045995534deb7ad9dd5b458da256e483fc7c789169af35b,PodSandboxId:d863933b38a2a02050c0571b956e843d5be5c9fed3aee41b1f6949670442f46b,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758067826604845276,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9wt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698b133d-2da9-43ed-b8ce-879f34603003,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8f3af172c0bd637e14426a9b5fdc5cbbc23deceab06225df5426aeb77e9a8f2,PodSandboxId:458f5d6c912fc37c345f30a78f1b973cbcee768e1b05c1ee58abd815bd241199,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Stat
e:CONTAINER_RUNNING,CreatedAt:1758067822986277578,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e4a7d9ac9b8e3e4d16e351fdab7f9d,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a20360f0aee1a4da4cc6a16e88ac133d2bde5bba76901442ce606920448fe8,PodSandboxId:7f86b9e327927f4d00ca5146249610f4944f164385c58455b205c46b8c67f48c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758067822789905827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 774929cd9e929910c78f51089f6ce784,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c82c77796405ca65fc28e9767c340158a833571d00f641613fd2d58bdd14c622,PodSandboxId:b56e2b3d2c075a31eb4bd7720121312b013752f62d57377ed86410cc9629b545,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758067822818600550,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c865c4ef5aadbdf17a9a89be8d577f,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68da90bb72167b6caafe3982615ce7f907bfb9f03790445580f0c541594277f9,PodSandboxId:d7eeac
2c9a0849cc4d6dd300bc8551c5a0219a2fe20cfce9fe44bad85e813e18,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758067822726239204,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08a5f4e563d186ce013dd3e014ba54,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:009cc8
f40ca65b9b9d1844c8cec12e3aa639d4fa6159eb2b8c08cdaf9e366923,PodSandboxId:f503e182fbce8ab3c9eb489d5c616758ff269cdbb35bfc55e4ff40183a286937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758067785712251954,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcf69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cebfaed2-8cab-4dd0-8ca6-089cfabdc70e,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a85c0699434bfc72e545b837eddb3394f35737b
3a9f90481867a606e119008,PodSandboxId:1e23269daf1c84e92ef48de5aa4f28fb4912cd5a1aa31dd416695ada1f606ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758067785735153587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893852c-182d-4f0b-adc7-6cf85183f756,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac1aca53e9bdc7d9a3710b2ccda65b1e62a52c7cbd842fc18aeeb
26b25c09ec,PodSandboxId:1bfab32a2850088645750281210cfc6b4b54dc75234593f672b57682ea2d33cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758067785707912229,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9wt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698b133d-2da9-43ed-b8ce-879f34603003,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"rea
diness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96fd4e7775ff38539b28775afa2831a2e91c348e198c6eff2c89008e7335c0b,PodSandboxId:4a8d05ebc9a83fe10215287312c3c3b0b90ea010413ffe9a8e88ac3e16f6cdcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758067782013953312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c865c4ef5aadbdf17a9a89be8d577f,},Annotations:map[string
]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c0cc6452fe55a4fc2cb24900ba6a1bc794c734b614e67a42c3f8228f2947ca3,PodSandboxId:c86c3dbf05534f7f2de19b239bb40870b1d98e0029133a337706a3c510caff82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758067781971731001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-fun
ctional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 774929cd9e929910c78f51089f6ce784,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa50634c06db9a61f344a7426395967b47f5fe13c1ca9ac1b2b48952c05614b5,PodSandboxId:379010ae968c7e55fc6fb81cec62140f4de438ac3704d3384b918ec846207ee0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758067781910668039,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08a5f4e563d186ce013dd3e014ba54,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e4d61b2-cb84-4b14-aae5-f1056c530c75 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.921043048Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=144a36c2-4dfd-4f24-8486-84f53becd29b name=/runtime.v1.RuntimeService/Version
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.921148231Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=144a36c2-4dfd-4f24-8486-84f53becd29b name=/runtime.v1.RuntimeService/Version
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.922705710Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e802edb-110b-4a2a-9542-9af5005923f8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.923844381Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758068174923818248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:222508,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e802edb-110b-4a2a-9542-9af5005923f8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.924601116Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d3540e2-6970-42b1-bfa3-70c6e79e1c54 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.924675320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d3540e2-6970-42b1-bfa3-70c6e79e1c54 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.924994038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:929f0819cfce05f107cc586915d914ba997c586fcdb094e4bba5c2ea7660752a,PodSandboxId:759cbccd052f2166835e9b1a257ad42cafa9c1605a331a89d3d82b04a5bae582,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758067927541338404,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6736e2e9-c999-4357-b65c-6e99190f152c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3fd165c189428df5aab57509f99623291749b001ed12741f449f2b7882a87c,PodSandboxId:ff755b577a7cee0fc8362c63706f53bb1783133965ff2b4e3be9929bdf14b48b,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1758067864926016845,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-b8f7n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75846762-7135-48fa-b2aa-8d1927545a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72295602e69fc60e984f6c9a6d585562117c81fcab649a8342e4fd679735d3,PodSandboxId:5eb45331e9d76f8fa626d2aba2c0e16d298041a32f582a7e0cfb14b7ba23559d,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1758067863341926715,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-fk8qm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d5aab6dc-8703-4af7-bd0b-093f75de9f53,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"na
me\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9f839ba278afe3c20abe1de686c7602b2a3be69877fbb9aa07bfe28a5c2d79,PodSandboxId:70a0d4f863edff493fa02649a0af2fe9b34eebed31eafbb924e56824cb934bcd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758067826645872220,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcf69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cebfaed2-8cab-4dd0-8ca6-089cfabdc70e,},Annotations:map[string]string{io.kubernet
es.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042dc70379de4d844d8f12b9393bba3b57b1e97c822197d216f26e93222192e7,PodSandboxId:d584c17a5066fec8aa5659b1cc0984e5255a92d28a3de956712588061283eee1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758067826579622903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893852c-182d-4f0b-adc7-6cf85183f756,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e634c3104080a0d045995534deb7ad9dd5b458da256e483fc7c789169af35b,PodSandboxId:d863933b38a2a02050c0571b956e843d5be5c9fed3aee41b1f6949670442f46b,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758067826604845276,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9wt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698b133d-2da9-43ed-b8ce-879f34603003,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8f3af172c0bd637e14426a9b5fdc5cbbc23deceab06225df5426aeb77e9a8f2,PodSandboxId:458f5d6c912fc37c345f30a78f1b973cbcee768e1b05c1ee58abd815bd241199,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Stat
e:CONTAINER_RUNNING,CreatedAt:1758067822986277578,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e4a7d9ac9b8e3e4d16e351fdab7f9d,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a20360f0aee1a4da4cc6a16e88ac133d2bde5bba76901442ce606920448fe8,PodSandboxId:7f86b9e327927f4d00ca5146249610f4944f164385c58455b205c46b8c67f48c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758067822789905827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 774929cd9e929910c78f51089f6ce784,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c82c77796405ca65fc28e9767c340158a833571d00f641613fd2d58bdd14c622,PodSandboxId:b56e2b3d2c075a31eb4bd7720121312b013752f62d57377ed86410cc9629b545,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758067822818600550,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c865c4ef5aadbdf17a9a89be8d577f,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68da90bb72167b6caafe3982615ce7f907bfb9f03790445580f0c541594277f9,PodSandboxId:d7eeac
2c9a0849cc4d6dd300bc8551c5a0219a2fe20cfce9fe44bad85e813e18,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758067822726239204,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08a5f4e563d186ce013dd3e014ba54,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:009cc8
f40ca65b9b9d1844c8cec12e3aa639d4fa6159eb2b8c08cdaf9e366923,PodSandboxId:f503e182fbce8ab3c9eb489d5c616758ff269cdbb35bfc55e4ff40183a286937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758067785712251954,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcf69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cebfaed2-8cab-4dd0-8ca6-089cfabdc70e,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a85c0699434bfc72e545b837eddb3394f35737b
3a9f90481867a606e119008,PodSandboxId:1e23269daf1c84e92ef48de5aa4f28fb4912cd5a1aa31dd416695ada1f606ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758067785735153587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893852c-182d-4f0b-adc7-6cf85183f756,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac1aca53e9bdc7d9a3710b2ccda65b1e62a52c7cbd842fc18aeeb
26b25c09ec,PodSandboxId:1bfab32a2850088645750281210cfc6b4b54dc75234593f672b57682ea2d33cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758067785707912229,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9wt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698b133d-2da9-43ed-b8ce-879f34603003,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"rea
diness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96fd4e7775ff38539b28775afa2831a2e91c348e198c6eff2c89008e7335c0b,PodSandboxId:4a8d05ebc9a83fe10215287312c3c3b0b90ea010413ffe9a8e88ac3e16f6cdcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758067782013953312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c865c4ef5aadbdf17a9a89be8d577f,},Annotations:map[string
]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c0cc6452fe55a4fc2cb24900ba6a1bc794c734b614e67a42c3f8228f2947ca3,PodSandboxId:c86c3dbf05534f7f2de19b239bb40870b1d98e0029133a337706a3c510caff82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758067781971731001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-fun
ctional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 774929cd9e929910c78f51089f6ce784,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa50634c06db9a61f344a7426395967b47f5fe13c1ca9ac1b2b48952c05614b5,PodSandboxId:379010ae968c7e55fc6fb81cec62140f4de438ac3704d3384b918ec846207ee0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758067781910668039,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08a5f4e563d186ce013dd3e014ba54,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d3540e2-6970-42b1-bfa3-70c6e79e1c54 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.970990792Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c87271c-a2a8-4df0-8195-fe7eb2f9fc4d name=/runtime.v1.RuntimeService/Version
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.971107475Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c87271c-a2a8-4df0-8195-fe7eb2f9fc4d name=/runtime.v1.RuntimeService/Version
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.973014676Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0950398-18de-4449-8f4e-81a947aa701f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.973768418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758068174973743379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:222508,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0950398-18de-4449-8f4e-81a947aa701f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.974710779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd051c71-56e6-4292-a162-66bded229f29 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.974767872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd051c71-56e6-4292-a162-66bded229f29 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:16:14 functional-456067 crio[5815]: time="2025-09-17 00:16:14.975119513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:929f0819cfce05f107cc586915d914ba997c586fcdb094e4bba5c2ea7660752a,PodSandboxId:759cbccd052f2166835e9b1a257ad42cafa9c1605a331a89d3d82b04a5bae582,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758067927541338404,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6736e2e9-c999-4357-b65c-6e99190f152c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3fd165c189428df5aab57509f99623291749b001ed12741f449f2b7882a87c,PodSandboxId:ff755b577a7cee0fc8362c63706f53bb1783133965ff2b4e3be9929bdf14b48b,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1758067864926016845,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-b8f7n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75846762-7135-48fa-b2aa-8d1927545a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72295602e69fc60e984f6c9a6d585562117c81fcab649a8342e4fd679735d3,PodSandboxId:5eb45331e9d76f8fa626d2aba2c0e16d298041a32f582a7e0cfb14b7ba23559d,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1758067863341926715,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-fk8qm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d5aab6dc-8703-4af7-bd0b-093f75de9f53,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"na
me\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9f839ba278afe3c20abe1de686c7602b2a3be69877fbb9aa07bfe28a5c2d79,PodSandboxId:70a0d4f863edff493fa02649a0af2fe9b34eebed31eafbb924e56824cb934bcd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758067826645872220,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcf69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cebfaed2-8cab-4dd0-8ca6-089cfabdc70e,},Annotations:map[string]string{io.kubernet
es.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042dc70379de4d844d8f12b9393bba3b57b1e97c822197d216f26e93222192e7,PodSandboxId:d584c17a5066fec8aa5659b1cc0984e5255a92d28a3de956712588061283eee1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758067826579622903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893852c-182d-4f0b-adc7-6cf85183f756,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e634c3104080a0d045995534deb7ad9dd5b458da256e483fc7c789169af35b,PodSandboxId:d863933b38a2a02050c0571b956e843d5be5c9fed3aee41b1f6949670442f46b,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758067826604845276,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9wt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698b133d-2da9-43ed-b8ce-879f34603003,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8f3af172c0bd637e14426a9b5fdc5cbbc23deceab06225df5426aeb77e9a8f2,PodSandboxId:458f5d6c912fc37c345f30a78f1b973cbcee768e1b05c1ee58abd815bd241199,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Stat
e:CONTAINER_RUNNING,CreatedAt:1758067822986277578,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e4a7d9ac9b8e3e4d16e351fdab7f9d,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a20360f0aee1a4da4cc6a16e88ac133d2bde5bba76901442ce606920448fe8,PodSandboxId:7f86b9e327927f4d00ca5146249610f4944f164385c58455b205c46b8c67f48c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758067822789905827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 774929cd9e929910c78f51089f6ce784,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c82c77796405ca65fc28e9767c340158a833571d00f641613fd2d58bdd14c622,PodSandboxId:b56e2b3d2c075a31eb4bd7720121312b013752f62d57377ed86410cc9629b545,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758067822818600550,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c865c4ef5aadbdf17a9a89be8d577f,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68da90bb72167b6caafe3982615ce7f907bfb9f03790445580f0c541594277f9,PodSandboxId:d7eeac
2c9a0849cc4d6dd300bc8551c5a0219a2fe20cfce9fe44bad85e813e18,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758067822726239204,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08a5f4e563d186ce013dd3e014ba54,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:009cc8
f40ca65b9b9d1844c8cec12e3aa639d4fa6159eb2b8c08cdaf9e366923,PodSandboxId:f503e182fbce8ab3c9eb489d5c616758ff269cdbb35bfc55e4ff40183a286937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758067785712251954,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcf69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cebfaed2-8cab-4dd0-8ca6-089cfabdc70e,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a85c0699434bfc72e545b837eddb3394f35737b
3a9f90481867a606e119008,PodSandboxId:1e23269daf1c84e92ef48de5aa4f28fb4912cd5a1aa31dd416695ada1f606ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758067785735153587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893852c-182d-4f0b-adc7-6cf85183f756,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac1aca53e9bdc7d9a3710b2ccda65b1e62a52c7cbd842fc18aeeb
26b25c09ec,PodSandboxId:1bfab32a2850088645750281210cfc6b4b54dc75234593f672b57682ea2d33cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758067785707912229,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9wt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698b133d-2da9-43ed-b8ce-879f34603003,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"rea
diness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96fd4e7775ff38539b28775afa2831a2e91c348e198c6eff2c89008e7335c0b,PodSandboxId:4a8d05ebc9a83fe10215287312c3c3b0b90ea010413ffe9a8e88ac3e16f6cdcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758067782013953312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c865c4ef5aadbdf17a9a89be8d577f,},Annotations:map[string
]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c0cc6452fe55a4fc2cb24900ba6a1bc794c734b614e67a42c3f8228f2947ca3,PodSandboxId:c86c3dbf05534f7f2de19b239bb40870b1d98e0029133a337706a3c510caff82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758067781971731001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-fun
ctional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 774929cd9e929910c78f51089f6ce784,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa50634c06db9a61f344a7426395967b47f5fe13c1ca9ac1b2b48952c05614b5,PodSandboxId:379010ae968c7e55fc6fb81cec62140f4de438ac3704d3384b918ec846207ee0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758067781910668039,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08a5f4e563d186ce013dd3e014ba54,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd051c71-56e6-4292-a162-66bded229f29 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	929f0819cfce0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e     4 minutes ago       Exited              mount-munger              0                   759cbccd052f2       busybox-mount
	6e3fd165c1894       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   5 minutes ago       Running             echo-server               0                   ff755b577a7ce       hello-node-connect-7d85dfc575-b8f7n
	ea72295602e69       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb         5 minutes ago       Running             mysql                     0                   5eb45331e9d76       mysql-5bb876957f-fk8qm
	4a9f839ba278a       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                        5 minutes ago       Running             kube-proxy                3                   70a0d4f863edf       kube-proxy-pcf69
	c2e634c310408       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        5 minutes ago       Running             coredns                   3                   d863933b38a2a       coredns-66bc5c9577-z9wt2
	042dc70379de4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        5 minutes ago       Running             storage-provisioner       4                   d584c17a5066f       storage-provisioner
	c8f3af172c0bd       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                        5 minutes ago       Running             kube-apiserver            0                   458f5d6c912fc       kube-apiserver-functional-456067
	c82c77796405c       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                        5 minutes ago       Running             kube-scheduler            3                   b56e2b3d2c075       kube-scheduler-functional-456067
	c2a20360f0aee       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                        5 minutes ago       Running             kube-controller-manager   3                   7f86b9e327927       kube-controller-manager-functional-456067
	68da90bb72167       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        5 minutes ago       Running             etcd                      3                   d7eeac2c9a084       etcd-functional-456067
	d0a85c0699434       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        6 minutes ago       Exited              storage-provisioner       3                   1e23269daf1c8       storage-provisioner
	009cc8f40ca65       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                        6 minutes ago       Exited              kube-proxy                2                   f503e182fbce8       kube-proxy-pcf69
	dac1aca53e9bd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        6 minutes ago       Exited              coredns                   2                   1bfab32a28500       coredns-66bc5c9577-z9wt2
	f96fd4e7775ff       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                        6 minutes ago       Exited              kube-scheduler            2                   4a8d05ebc9a83       kube-scheduler-functional-456067
	4c0cc6452fe55       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                        6 minutes ago       Exited              kube-controller-manager   2                   c86c3dbf05534       kube-controller-manager-functional-456067
	aa50634c06db9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        6 minutes ago       Exited              etcd                      2                   379010ae968c7       etcd-functional-456067
	
	
	==> coredns [c2e634c3104080a0d045995534deb7ad9dd5b458da256e483fc7c789169af35b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51773 - 29704 "HINFO IN 6836174930101226771.3542159233259123785. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.462094087s
	
	
	==> coredns [dac1aca53e9bdc7d9a3710b2ccda65b1e62a52c7cbd842fc18aeeb26b25c09ec] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60371 - 54073 "HINFO IN 8748564013944487444.4424170425081406606. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.117396738s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-456067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-456067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=functional-456067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_08_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:08:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-456067
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:16:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:12:27 +0000   Wed, 17 Sep 2025 00:08:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:12:27 +0000   Wed, 17 Sep 2025 00:08:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:12:27 +0000   Wed, 17 Sep 2025 00:08:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:12:27 +0000   Wed, 17 Sep 2025 00:08:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.44
	  Hostname:    functional-456067
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 58a0f112b18c4204bc54cae70ae412b4
	  System UUID:                58a0f112-b18c-4204-bc54-cae70ae412b4
	  Boot ID:                    ad612535-5cba-4164-9f8c-d12fdd7b5bac
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-fkpgc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  default                     hello-node-connect-7d85dfc575-b8f7n           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  default                     mysql-5bb876957f-fk8qm                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    5m25s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 coredns-66bc5c9577-z9wt2                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m46s
	  kube-system                 etcd-functional-456067                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m52s
	  kube-system                 kube-apiserver-functional-456067              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 kube-controller-manager-functional-456067     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 kube-proxy-pcf69                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 kube-scheduler-functional-456067              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m44s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-sgrhm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vxn8t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m44s                  kube-proxy       
	  Normal  Starting                 5m48s                  kube-proxy       
	  Normal  Starting                 6m29s                  kube-proxy       
	  Normal  Starting                 6m53s                  kube-proxy       
	  Normal  Starting                 8m                     kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m59s (x8 over 8m)     kubelet          Node functional-456067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m59s (x7 over 8m)     kubelet          Node functional-456067 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m59s (x8 over 8m)     kubelet          Node functional-456067 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m52s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m52s                  kubelet          Node functional-456067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m52s                  kubelet          Node functional-456067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m52s                  kubelet          Node functional-456067 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m51s                  kubelet          Node functional-456067 status is now: NodeReady
	  Normal  RegisteredNode           7m47s                  node-controller  Node functional-456067 event: Registered Node functional-456067 in Controller
	  Normal  RegisteredNode           6m51s                  node-controller  Node functional-456067 event: Registered Node functional-456067 in Controller
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m34s (x8 over 6m34s)  kubelet          Node functional-456067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s (x8 over 6m34s)  kubelet          Node functional-456067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s (x7 over 6m34s)  kubelet          Node functional-456067 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m34s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           6m27s                  node-controller  Node functional-456067 event: Registered Node functional-456067 in Controller
	  Normal  Starting                 5m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m53s (x8 over 5m53s)  kubelet          Node functional-456067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m53s (x8 over 5m53s)  kubelet          Node functional-456067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m53s (x7 over 5m53s)  kubelet          Node functional-456067 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m46s                  node-controller  Node functional-456067 event: Registered Node functional-456067 in Controller
	
	
	==> dmesg <==
	[  +0.002590] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Sep17 00:08] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000049] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.090099] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.102682] kauditd_printk_skb: 130 callbacks suppressed
	[  +0.157646] kauditd_printk_skb: 171 callbacks suppressed
	[  +6.339647] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.675252] kauditd_printk_skb: 252 callbacks suppressed
	[Sep17 00:09] kauditd_printk_skb: 38 callbacks suppressed
	[  +4.638331] kauditd_printk_skb: 328 callbacks suppressed
	[  +3.486901] kauditd_printk_skb: 3 callbacks suppressed
	[  +0.133926] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.144337] kauditd_printk_skb: 126 callbacks suppressed
	[  +8.158591] kauditd_printk_skb: 2 callbacks suppressed
	[Sep17 00:10] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.772756] kauditd_printk_skb: 273 callbacks suppressed
	[  +1.774509] kauditd_printk_skb: 119 callbacks suppressed
	[ +14.656307] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.993182] kauditd_printk_skb: 97 callbacks suppressed
	[Sep17 00:11] kauditd_printk_skb: 80 callbacks suppressed
	[  +0.000181] kauditd_printk_skb: 110 callbacks suppressed
	[Sep17 00:12] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.078844] crun[9816]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.661971] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [68da90bb72167b6caafe3982615ce7f907bfb9f03790445580f0c541594277f9] <==
	{"level":"info","ts":"2025-09-17T00:11:00.695445Z","caller":"traceutil/trace.go:172","msg":"trace[1510088569] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:832; }","duration":"286.07441ms","start":"2025-09-17T00:11:00.409365Z","end":"2025-09-17T00:11:00.695439Z","steps":["trace[1510088569] 'agreement among raft nodes before linearized reading'  (duration: 286.014137ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:11:00.695863Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"239.128174ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-09-17T00:11:00.695911Z","caller":"traceutil/trace.go:172","msg":"trace[1748402899] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:833; }","duration":"239.183754ms","start":"2025-09-17T00:11:00.456721Z","end":"2025-09-17T00:11:00.695905Z","steps":["trace[1748402899] 'agreement among raft nodes before linearized reading'  (duration: 239.073747ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:11:00.696031Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.421068ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:11:00.696045Z","caller":"traceutil/trace.go:172","msg":"trace[1693471884] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:833; }","duration":"138.438468ms","start":"2025-09-17T00:11:00.557602Z","end":"2025-09-17T00:11:00.696041Z","steps":["trace[1693471884] 'agreement among raft nodes before linearized reading'  (duration: 138.408273ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:11:00.696241Z","caller":"traceutil/trace.go:172","msg":"trace[446402970] transaction","detail":"{read_only:false; response_revision:833; number_of_response:1; }","duration":"313.533847ms","start":"2025-09-17T00:11:00.382455Z","end":"2025-09-17T00:11:00.695989Z","steps":["trace[446402970] 'process raft request'  (duration: 313.255426ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:11:00.697512Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:11:00.382435Z","time spent":"313.835605ms","remote":"127.0.0.1:55588","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1934,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/sp-pod\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/sp-pod\" value_size:1897 >> failure:<>"}
	{"level":"info","ts":"2025-09-17T00:11:02.791241Z","caller":"traceutil/trace.go:172","msg":"trace[1349327765] linearizableReadLoop","detail":"{readStateIndex:917; appliedIndex:917; }","duration":"233.544576ms","start":"2025-09-17T00:11:02.557682Z","end":"2025-09-17T00:11:02.791227Z","steps":["trace[1349327765] 'read index received'  (duration: 233.539322ms)","trace[1349327765] 'applied index is now lower than readState.Index'  (duration: 4.591µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:11:02.791363Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"233.687839ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:11:02.791383Z","caller":"traceutil/trace.go:172","msg":"trace[1556255295] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:837; }","duration":"233.724452ms","start":"2025-09-17T00:11:02.557652Z","end":"2025-09-17T00:11:02.791377Z","steps":["trace[1556255295] 'agreement among raft nodes before linearized reading'  (duration: 233.667633ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:11:02.977723Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.991027ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6201343802298446009 > lease_revoke:<id:560f99550211afd9>","response":"size:28"}
	{"level":"info","ts":"2025-09-17T00:11:02.977803Z","caller":"traceutil/trace.go:172","msg":"trace[1181892876] linearizableReadLoop","detail":"{readStateIndex:918; appliedIndex:917; }","duration":"186.498723ms","start":"2025-09-17T00:11:02.791295Z","end":"2025-09-17T00:11:02.977794Z","steps":["trace[1181892876] 'read index received'  (duration: 14.053µs)","trace[1181892876] 'applied index is now lower than readState.Index'  (duration: 186.483999ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:11:02.977855Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"263.342306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:11:02.977867Z","caller":"traceutil/trace.go:172","msg":"trace[664484668] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:837; }","duration":"263.362237ms","start":"2025-09-17T00:11:02.714500Z","end":"2025-09-17T00:11:02.977862Z","steps":["trace[664484668] 'agreement among raft nodes before linearized reading'  (duration: 263.320779ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:11:02.978172Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"241.493127ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-09-17T00:11:02.978212Z","caller":"traceutil/trace.go:172","msg":"trace[1958673148] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:837; }","duration":"241.53884ms","start":"2025-09-17T00:11:02.736667Z","end":"2025-09-17T00:11:02.978206Z","steps":["trace[1958673148] 'agreement among raft nodes before linearized reading'  (duration: 241.419533ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:11:09.383251Z","caller":"traceutil/trace.go:172","msg":"trace[1850063118] linearizableReadLoop","detail":"{readStateIndex:961; appliedIndex:961; }","duration":"110.602231ms","start":"2025-09-17T00:11:09.272632Z","end":"2025-09-17T00:11:09.383234Z","steps":["trace[1850063118] 'read index received'  (duration: 110.597648ms)","trace[1850063118] 'applied index is now lower than readState.Index'  (duration: 3.919µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-17T00:11:09.383406Z","caller":"traceutil/trace.go:172","msg":"trace[793459114] transaction","detail":"{read_only:false; response_revision:880; number_of_response:1; }","duration":"341.178896ms","start":"2025-09-17T00:11:09.042217Z","end":"2025-09-17T00:11:09.383396Z","steps":["trace[793459114] 'process raft request'  (duration: 341.07646ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:11:09.383993Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:11:09.042199Z","time spent":"341.593927ms","remote":"127.0.0.1:55552","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:877 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-09-17T00:11:09.385228Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.904897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:11:09.385352Z","caller":"traceutil/trace.go:172","msg":"trace[863660152] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:880; }","duration":"112.716727ms","start":"2025-09-17T00:11:09.272627Z","end":"2025-09-17T00:11:09.385343Z","steps":["trace[863660152] 'agreement among raft nodes before linearized reading'  (duration: 110.909223ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:11:11.692694Z","caller":"traceutil/trace.go:172","msg":"trace[1075119690] transaction","detail":"{read_only:false; response_revision:881; number_of_response:1; }","duration":"279.973043ms","start":"2025-09-17T00:11:11.412708Z","end":"2025-09-17T00:11:11.692682Z","steps":["trace[1075119690] 'process raft request'  (duration: 279.88091ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:11:11.692929Z","caller":"traceutil/trace.go:172","msg":"trace[612386911] linearizableReadLoop","detail":"{readStateIndex:962; appliedIndex:963; }","duration":"135.153849ms","start":"2025-09-17T00:11:11.557768Z","end":"2025-09-17T00:11:11.692922Z","steps":["trace[612386911] 'read index received'  (duration: 135.151573ms)","trace[612386911] 'applied index is now lower than readState.Index'  (duration: 1.858µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:11:11.693006Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.2468ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:11:11.693052Z","caller":"traceutil/trace.go:172","msg":"trace[1381131886] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:881; }","duration":"135.295528ms","start":"2025-09-17T00:11:11.557743Z","end":"2025-09-17T00:11:11.693039Z","steps":["trace[1381131886] 'agreement among raft nodes before linearized reading'  (duration: 135.226863ms)"],"step_count":1}
	
	
	==> etcd [aa50634c06db9a61f344a7426395967b47f5fe13c1ca9ac1b2b48952c05614b5] <==
	{"level":"warn","ts":"2025-09-17T00:09:44.037616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:44.045030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:44.078425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:44.098759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:44.121749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:44.139035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:44.190735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50952","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:10:11.305133Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-17T00:10:11.305264Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-456067","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.44:2380"],"advertise-client-urls":["https://192.168.50.44:2379"]}
	{"level":"error","ts":"2025-09-17T00:10:11.305385Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:10:11.402959Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:10:11.403064Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:11.403088Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"bc0cc19ab3a6560f","current-leader-member-id":"bc0cc19ab3a6560f"}
	{"level":"info","ts":"2025-09-17T00:10:11.403184Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-17T00:10:11.403193Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-17T00:10:11.403331Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.44:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:10:11.403431Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.44:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:10:11.403438Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.44:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-17T00:10:11.403481Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:10:11.403487Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:10:11.403492Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:11.407290Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.44:2380"}
	{"level":"error","ts":"2025-09-17T00:10:11.407342Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.44:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:11.407365Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.44:2380"}
	{"level":"info","ts":"2025-09-17T00:10:11.407371Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-456067","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.44:2380"],"advertise-client-urls":["https://192.168.50.44:2379"]}
	
	
	==> kernel <==
	 00:16:15 up 8 min,  0 users,  load average: 0.30, 0.45, 0.30
	Linux functional-456067 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Sep  9 02:24:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [c8f3af172c0bd637e14426a9b5fdc5cbbc23deceab06225df5426aeb77e9a8f2] <==
	I0917 00:10:27.650412       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0917 00:10:27.681508       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 00:10:27.690885       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 00:10:29.356191       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:10:29.605142       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 00:10:29.704520       1 controller.go:667] quota admission added evaluator for: endpoints
	I0917 00:10:45.271376       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.125.7"}
	I0917 00:10:50.258242       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.110.172"}
	I0917 00:10:53.259901       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.160.219"}
	I0917 00:11:05.999938       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.240.195"}
	E0917 00:11:09.562004       1 conn.go:339] Error on socket receive: read tcp 192.168.50.44:8441->192.168.50.1:38920: use of closed network connection
	E0917 00:11:10.287837       1 conn.go:339] Error on socket receive: read tcp 192.168.50.44:8441->192.168.50.1:60234: use of closed network connection
	E0917 00:11:12.042476       1 conn.go:339] Error on socket receive: read tcp 192.168.50.44:8441->192.168.50.1:60246: use of closed network connection
	E0917 00:11:13.948158       1 conn.go:339] Error on socket receive: read tcp 192.168.50.44:8441->192.168.50.1:60302: use of closed network connection
	I0917 00:11:14.993124       1 controller.go:667] quota admission added evaluator for: namespaces
	I0917 00:11:15.382466       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.8.104"}
	I0917 00:11:15.412979       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.1.170"}
	I0917 00:11:27.244153       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:11:51.085595       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:12:41.435242       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:13:09.256735       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:14:06.495621       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:14:19.155645       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:15:10.478859       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:15:30.772670       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [4c0cc6452fe55a4fc2cb24900ba6a1bc794c734b614e67a42c3f8228f2947ca3] <==
	I0917 00:09:48.287652       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0917 00:09:48.287791       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0917 00:09:48.289497       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0917 00:09:48.289625       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:09:48.297255       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:09:48.297358       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0917 00:09:48.299593       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:09:48.301454       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 00:09:48.301522       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:09:48.301657       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0917 00:09:48.304284       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0917 00:09:48.305418       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0917 00:09:48.307627       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0917 00:09:48.309973       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0917 00:09:48.311112       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0917 00:09:48.311131       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0917 00:09:48.314353       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0917 00:09:48.317664       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:09:48.321909       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0917 00:09:48.324505       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0917 00:09:48.336441       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0917 00:09:48.337777       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0917 00:09:48.339021       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0917 00:09:48.340404       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0917 00:09:48.348324       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-controller-manager [c2a20360f0aee1a4da4cc6a16e88ac133d2bde5bba76901442ce606920448fe8] <==
	I0917 00:10:29.315849       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0917 00:10:29.317910       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0917 00:10:29.319068       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0917 00:10:29.321862       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0917 00:10:29.332192       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0917 00:10:29.333378       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:10:29.338749       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0917 00:10:29.345208       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:10:29.345270       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 00:10:29.345289       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:10:29.351142       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0917 00:10:29.351150       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0917 00:10:29.353671       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 00:10:29.353801       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 00:10:29.353891       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-456067"
	I0917 00:10:29.354141       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0917 00:10:29.354199       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	E0917 00:11:15.122641       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:11:15.124374       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:11:15.139263       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:11:15.145292       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:11:15.152494       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:11:15.154701       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:11:15.165120       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:11:15.173324       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [009cc8f40ca65b9b9d1844c8cec12e3aa639d4fa6159eb2b8c08cdaf9e366923] <==
	I0917 00:09:46.079350       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:09:46.179494       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:09:46.179577       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.44"]
	E0917 00:09:46.179643       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:09:46.238380       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0917 00:09:46.238530       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 00:09:46.238662       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:09:46.253782       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:09:46.254800       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:09:46.254835       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:09:46.260073       1 config.go:200] "Starting service config controller"
	I0917 00:09:46.260146       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:09:46.260174       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:09:46.260193       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:09:46.260234       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:09:46.260252       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:09:46.262321       1 config.go:309] "Starting node config controller"
	I0917 00:09:46.263485       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:09:46.265394       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:09:46.360370       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:09:46.360415       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:09:46.360434       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [4a9f839ba278afe3c20abe1de686c7602b2a3be69877fbb9aa07bfe28a5c2d79] <==
	I0917 00:10:27.103339       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:10:27.204343       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:10:27.204394       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.44"]
	E0917 00:10:27.204458       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:10:27.245359       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0917 00:10:27.245460       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 00:10:27.245646       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:10:27.256366       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:10:27.256760       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:10:27.256794       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:27.261975       1 config.go:309] "Starting node config controller"
	I0917 00:10:27.262007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:10:27.262013       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:10:27.262158       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:10:27.262184       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:10:27.262279       1 config.go:200] "Starting service config controller"
	I0917 00:10:27.262283       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:10:27.262307       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:10:27.262329       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:10:27.362659       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:10:27.362738       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:10:27.362749       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c82c77796405ca65fc28e9767c340158a833571d00f641613fd2d58bdd14c622] <==
	I0917 00:10:23.915888       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:10:26.171795       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:10:26.171898       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:26.178503       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0917 00:10:26.178652       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0917 00:10:26.178716       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:26.178740       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:26.178762       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:26.178779       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:26.179239       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:10:26.179446       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:10:26.279024       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:26.279094       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0917 00:10:26.279194       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f96fd4e7775ff38539b28775afa2831a2e91c348e198c6eff2c89008e7335c0b] <==
	I0917 00:09:43.237162       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:09:45.029707       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:09:45.029746       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:09:45.035445       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0917 00:09:45.035524       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0917 00:09:45.035598       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:09:45.035605       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:09:45.035616       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:09:45.035622       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:09:45.035833       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:09:45.035904       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:09:45.135740       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:09:45.135844       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0917 00:09:45.135926       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:11.325428       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0917 00:10:11.325486       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0917 00:10:11.325516       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 00:10:11.341764       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:11.342692       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I0917 00:10:11.342725       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:11.342952       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0917 00:10:11.346457       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 00:15:22 functional-456067 kubelet[6532]: E0917 00:15:22.393517    6532 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode5c865c4ef5aadbdf17a9a89be8d577f/crio-4a8d05ebc9a83fe10215287312c3c3b0b90ea010413ffe9a8e88ac3e16f6cdcb: Error finding container 4a8d05ebc9a83fe10215287312c3c3b0b90ea010413ffe9a8e88ac3e16f6cdcb: Status 404 returned error can't find the container with id 4a8d05ebc9a83fe10215287312c3c3b0b90ea010413ffe9a8e88ac3e16f6cdcb
	Sep 17 00:15:22 functional-456067 kubelet[6532]: E0917 00:15:22.393910    6532 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod774929cd9e929910c78f51089f6ce784/crio-c86c3dbf05534f7f2de19b239bb40870b1d98e0029133a337706a3c510caff82: Error finding container c86c3dbf05534f7f2de19b239bb40870b1d98e0029133a337706a3c510caff82: Status 404 returned error can't find the container with id c86c3dbf05534f7f2de19b239bb40870b1d98e0029133a337706a3c510caff82
	Sep 17 00:15:22 functional-456067 kubelet[6532]: E0917 00:15:22.579020    6532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068122578506152  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:15:22 functional-456067 kubelet[6532]: E0917 00:15:22.579067    6532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068122578506152  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:15:25 functional-456067 kubelet[6532]: E0917 00:15:25.305839    6532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vxn8t" podUID="17c144bf-8d27-4c0d-abf2-161c7f5fcc7e"
	Sep 17 00:15:32 functional-456067 kubelet[6532]: E0917 00:15:32.581612    6532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068132581010806  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:15:32 functional-456067 kubelet[6532]: E0917 00:15:32.581639    6532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068132581010806  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:15:42 functional-456067 kubelet[6532]: E0917 00:15:42.232145    6532 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 17 00:15:42 functional-456067 kubelet[6532]: E0917 00:15:42.232207    6532 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 17 00:15:42 functional-456067 kubelet[6532]: E0917 00:15:42.232365    6532 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(bf17def4-4e0f-4ae8-a19c-3925f04e81e4): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 00:15:42 functional-456067 kubelet[6532]: E0917 00:15:42.232394    6532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf17def4-4e0f-4ae8-a19c-3925f04e81e4"
	Sep 17 00:15:42 functional-456067 kubelet[6532]: E0917 00:15:42.584604    6532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068142584194557  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:15:42 functional-456067 kubelet[6532]: E0917 00:15:42.584646    6532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068142584194557  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:15:52 functional-456067 kubelet[6532]: E0917 00:15:52.586844    6532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068152586218953  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:15:52 functional-456067 kubelet[6532]: E0917 00:15:52.586908    6532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068152586218953  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:15:53 functional-456067 kubelet[6532]: E0917 00:15:53.304606    6532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf17def4-4e0f-4ae8-a19c-3925f04e81e4"
	Sep 17 00:16:02 functional-456067 kubelet[6532]: E0917 00:16:02.589234    6532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068162588882272  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:16:02 functional-456067 kubelet[6532]: E0917 00:16:02.589282    6532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068162588882272  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:16:08 functional-456067 kubelet[6532]: E0917 00:16:08.308068    6532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf17def4-4e0f-4ae8-a19c-3925f04e81e4"
	Sep 17 00:16:12 functional-456067 kubelet[6532]: E0917 00:16:12.591782    6532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068172591342912  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:16:12 functional-456067 kubelet[6532]: E0917 00:16:12.591809    6532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068172591342912  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:16:12 functional-456067 kubelet[6532]: E0917 00:16:12.935390    6532 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 17 00:16:12 functional-456067 kubelet[6532]: E0917 00:16:12.935439    6532 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 17 00:16:12 functional-456067 kubelet[6532]: E0917 00:16:12.935685    6532 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-fkpgc_default(a6cc2546-7037-4071-810f-d239693c428b): ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 00:16:12 functional-456067 kubelet[6532]: E0917 00:16:12.935725    6532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-fkpgc" podUID="a6cc2546-7037-4071-810f-d239693c428b"
	
	
	==> storage-provisioner [042dc70379de4d844d8f12b9393bba3b57b1e97c822197d216f26e93222192e7] <==
	W0917 00:15:51.300332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:15:53.303644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:15:53.314276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:15:55.317316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:15:55.323391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:15:57.326625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:15:57.333637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:15:59.337712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:15:59.346519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:01.351776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:01.358373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:03.361887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:03.370878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:05.374708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:05.379596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:07.383498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:07.392216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:09.395968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:09.404623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:11.409357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:11.419621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:13.425715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:13.432200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:15.436115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:15.442962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d0a85c0699434bfc72e545b837eddb3394f35737b3a9f90481867a606e119008] <==
	I0917 00:09:45.969685       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 00:09:45.986739       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 00:09:45.988132       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0917 00:09:45.996023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:09:49.454293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:09:53.714364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:09:57.313195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:00.370163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:03.392392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:03.398053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0917 00:10:03.398172       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 00:10:03.398294       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-456067_099082b7-9f11-413f-b380-a5a3ba8127d6!
	I0917 00:10:03.398708       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b6c90970-c155-42ec-acaa-d206fa4df074", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-456067_099082b7-9f11-413f-b380-a5a3ba8127d6 became leader
	W0917 00:10:03.402654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:03.409950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0917 00:10:03.498462       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-456067_099082b7-9f11-413f-b380-a5a3ba8127d6!
	W0917 00:10:05.413592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:05.423123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:07.429339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:07.439998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:09.444422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:09.449783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-456067 -n functional-456067
helpers_test.go:269: (dbg) Run:  kubectl --context functional-456067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-fkpgc sp-pod dashboard-metrics-scraper-77bf4d6c4c-sgrhm kubernetes-dashboard-855c9754f9-vxn8t
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-456067 describe pod busybox-mount hello-node-75c85bcc94-fkpgc sp-pod dashboard-metrics-scraper-77bf4d6c4c-sgrhm kubernetes-dashboard-855c9754f9-vxn8t
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-456067 describe pod busybox-mount hello-node-75c85bcc94-fkpgc sp-pod dashboard-metrics-scraper-77bf4d6c4c-sgrhm kubernetes-dashboard-855c9754f9-vxn8t: exit status 1 (91.195866ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-456067/192.168.50.44
	Start Time:       Wed, 17 Sep 2025 00:11:14 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://929f0819cfce05f107cc586915d914ba997c586fcdb094e4bba5c2ea7660752a
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 17 Sep 2025 00:12:07 +0000
	      Finished:     Wed, 17 Sep 2025 00:12:07 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p2558 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-p2558:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5m1s  default-scheduler  Successfully assigned default/busybox-mount to functional-456067
	  Normal  Pulling    5m1s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m9s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.272s (51.835s including waiting). Image size: 4631262 bytes.
	  Normal  Created    4m9s  kubelet            Created container: mount-munger
	  Normal  Started    4m9s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-fkpgc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-456067/192.168.50.44
	Start Time:       Wed, 17 Sep 2025 00:11:05 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m64wp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-m64wp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m10s                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-fkpgc to functional-456067
	  Normal   BackOff    114s (x2 over 4m10s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     114s (x2 over 4m10s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    102s (x3 over 5m10s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     4s (x3 over 4m10s)    kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4s (x3 over 4m10s)    kubelet            Error: ErrImagePull
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-456067/192.168.50.44
	Start Time:       Wed, 17 Sep 2025 00:11:00 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cqjcm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-cqjcm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m15s                  default-scheduler  Successfully assigned default/sp-pod to functional-456067
	  Normal   Pulling    2m13s (x3 over 5m13s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     34s (x3 over 4m41s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     34s (x3 over 4m41s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    8s (x4 over 4m40s)     kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     8s (x4 over 4m40s)     kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-sgrhm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vxn8t" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-456067 describe pod busybox-mount hello-node-75c85bcc94-fkpgc sp-pod dashboard-metrics-scraper-77bf4d6c4c-sgrhm kubernetes-dashboard-855c9754f9-vxn8t: exit status 1
E0917 00:16:53.406441  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/DashboardCmd (302.39s)
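All three images in this failure (docker.io/kubernetesui/dashboard, docker.io/nginx, docker.io/kicbase/echo-server) fail with toomanyrequests, i.e. Docker Hub's unauthenticated pull rate limit. A common way past that limit, outside of what this run does, is to pull with credentials: create a docker-registry secret (for example `kubectl create secret docker-registry regcred --docker-server=https://index.docker.io/v1/ --docker-username=<user> --docker-password=<token>`) and reference it from the pod spec. The secret name and the pod below are hypothetical placeholders, not objects created by this test; a minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: rate-limit-example        # hypothetical pod, for illustration only
spec:
  imagePullSecrets:
  - name: regcred                 # assumes the docker-registry secret above was created
  containers:
  - name: web
    image: docker.io/nginx        # any docker.io image hit by the unauthenticated limit

Another option specific to minikube is to transfer the image into the node from the host (e.g. `minikube image load docker.io/kicbase/echo-server:latest`) so the kubelet never pulls from docker.io at all.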

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (371.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [2893852c-182d-4f0b-adc7-6cf85183f756] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003985938s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-456067 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-456067 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-456067 get pvc myclaim -o=json
I0917 00:10:58.149185  145530 retry.go:31] will retry after 2.015666602s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:0c040bde-7c33-4da7-af45-4f56f9f7e07e ResourceVersion:823 Generation:0 CreationTimestamp:2025-09-17 00:10:58 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001c269e0 VolumeMode:0xc001c269f0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-456067 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-456067 apply -f testdata/storage-provisioner/pod.yaml
I0917 00:11:00.722786  145530 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [bf17def4-4e0f-4ae8-a19c-3925f04e81e4] Pending
helpers_test.go:352: "sp-pod" [bf17def4-4e0f-4ae8-a19c-3925f04e81e4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-456067 -n functional-456067
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-17 00:17:01.003665238 +0000 UTC m=+1132.755178684
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-456067 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-456067 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-456067/192.168.50.44
Start Time:       Wed, 17 Sep 2025 00:11:00 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:  10.244.0.9
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cqjcm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-cqjcm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/sp-pod to functional-456067
  Warning  Failed     79s (x3 over 5m26s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     79s (x3 over 5m26s)  kubelet            Error: ErrImagePull
  Normal   BackOff    41s (x5 over 5m25s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     41s (x5 over 5m25s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    28s (x4 over 5m58s)  kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-456067 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-456067 logs sp-pod -n default: exit status 1 (74.373219ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-456067 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
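For reference, the claim and pod this test applies can be read back from the retry dump and describe output above: a 500Mi ReadWriteOnce filesystem claim named myclaim, mounted by sp-pod's myfrontend container at /tmp/mount. Reconstructed as manifests (a sketch based on the logged last-applied-configuration and describe output; anything not shown there, such as a storage class, is left to cluster defaults rather than copied from the real testdata files):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: docker.io/nginx
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim

Read together with the events above, the claim appears to have bound (the test moved past its Bound check after one retry); the 6m0s timeout comes from the docker.io/nginx pull failing on the same Docker Hub rate limit as the other failures in this report.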
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-456067 -n functional-456067
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-456067 logs -n 25: (1.58801958s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                ARGS                                                                 │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-456067 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ mount          │ -p functional-456067 /tmp/TestFunctionalparallelMountCmdspecific-port3572376631/001:/mount-9p --alsologtostderr -v=1 --port 46464   │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ ssh            │ functional-456067 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh            │ functional-456067 ssh -- ls -la /mount-9p                                                                                           │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh            │ functional-456067 ssh sudo umount -f /mount-9p                                                                                      │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ mount          │ -p functional-456067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3539378027/001:/mount1 --alsologtostderr -v=1                  │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ ssh            │ functional-456067 ssh findmnt -T /mount1                                                                                            │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ mount          │ -p functional-456067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3539378027/001:/mount3 --alsologtostderr -v=1                  │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ mount          │ -p functional-456067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3539378027/001:/mount2 --alsologtostderr -v=1                  │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ ssh            │ functional-456067 ssh findmnt -T /mount1                                                                                            │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh            │ functional-456067 ssh findmnt -T /mount2                                                                                            │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh            │ functional-456067 ssh findmnt -T /mount3                                                                                            │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ mount          │ -p functional-456067 --kill=true                                                                                                    │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ start          │ -p functional-456067 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ start          │ -p functional-456067 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false           │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ update-context │ functional-456067 update-context --alsologtostderr -v=2                                                                             │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ update-context │ functional-456067 update-context --alsologtostderr -v=2                                                                             │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ update-context │ functional-456067 update-context --alsologtostderr -v=2                                                                             │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ image          │ functional-456067 image ls --format short --alsologtostderr                                                                         │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ image          │ functional-456067 image ls --format yaml --alsologtostderr                                                                          │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ ssh            │ functional-456067 ssh pgrep buildkitd                                                                                               │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │                     │
	│ image          │ functional-456067 image build -t localhost/my-image:functional-456067 testdata/build --alsologtostderr                              │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ image          │ functional-456067 image ls                                                                                                          │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ image          │ functional-456067 image ls --format json --alsologtostderr                                                                          │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	│ image          │ functional-456067 image ls --format table --alsologtostderr                                                                         │ functional-456067 │ jenkins │ v1.37.0 │ 17 Sep 25 00:12 UTC │ 17 Sep 25 00:12 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:12:14
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:12:14.113902  154467 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:12:14.114156  154467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:14.114165  154467 out.go:374] Setting ErrFile to fd 2...
	I0917 00:12:14.114169  154467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:14.114374  154467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
	I0917 00:12:14.114806  154467 out.go:368] Setting JSON to false
	I0917 00:12:14.115682  154467 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10478,"bootTime":1758057456,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:12:14.115776  154467 start.go:140] virtualization: kvm guest
	I0917 00:12:14.118770  154467 out.go:179] * [functional-456067] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:12:14.120711  154467 notify.go:220] Checking for updates...
	I0917 00:12:14.120772  154467 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:12:14.122569  154467 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:12:14.124357  154467 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	I0917 00:12:14.125594  154467 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	I0917 00:12:14.130418  154467 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:12:14.131745  154467 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:12:14.133275  154467 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:12:14.133700  154467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:12:14.133798  154467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:12:14.147372  154467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0917 00:12:14.147932  154467 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:12:14.148482  154467 main.go:141] libmachine: Using API Version  1
	I0917 00:12:14.148513  154467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:12:14.149023  154467 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:12:14.149247  154467 main.go:141] libmachine: (functional-456067) Calling .DriverName
	I0917 00:12:14.149533  154467 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:12:14.149963  154467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:12:14.150012  154467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:12:14.164179  154467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39459
	I0917 00:12:14.164728  154467 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:12:14.165310  154467 main.go:141] libmachine: Using API Version  1
	I0917 00:12:14.165331  154467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:12:14.165660  154467 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:12:14.165868  154467 main.go:141] libmachine: (functional-456067) Calling .DriverName
	I0917 00:12:14.196787  154467 out.go:179] * Using the kvm2 driver based on existing profile
	I0917 00:12:14.198209  154467 start.go:304] selected driver: kvm2
	I0917 00:12:14.198255  154467 start.go:918] validating driver "kvm2" against &{Name:functional-456067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-456067 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:12:14.198407  154467 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:12:14.199402  154467 cni.go:84] Creating CNI manager for ""
	I0917 00:12:14.199480  154467 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 00:12:14.199546  154467 start.go:348] cluster config:
	{Name:functional-456067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-456067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:12:14.200951  154467 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.898254929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758068221898229856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:222508,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4026fcff-30d9-4539-8325-85adca25704c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.898923195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14814620-36f6-4c11-9567-124a295c4196 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.899328069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14814620-36f6-4c11-9567-124a295c4196 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.900520766Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:929f0819cfce05f107cc586915d914ba997c586fcdb094e4bba5c2ea7660752a,PodSandboxId:759cbccd052f2166835e9b1a257ad42cafa9c1605a331a89d3d82b04a5bae582,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758067927541338404,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6736e2e9-c999-4357-b65c-6e99190f152c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3fd165c189428df5aab57509f99623291749b001ed12741f449f2b7882a87c,PodSandboxId:ff755b577a7cee0fc8362c63706f53bb1783133965ff2b4e3be9929bdf14b48b,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1758067864926016845,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-b8f7n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75846762-7135-48fa-b2aa-8d1927545a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72295602e69fc60e984f6c9a6d585562117c81fcab649a8342e4fd679735d3,PodSandboxId:5eb45331e9d76f8fa626d2aba2c0e16d298041a32f582a7e0cfb14b7ba23559d,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1758067863341926715,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-fk8qm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d5aab6dc-8703-4af7-bd0b-093f75de9f53,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"na
me\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9f839ba278afe3c20abe1de686c7602b2a3be69877fbb9aa07bfe28a5c2d79,PodSandboxId:70a0d4f863edff493fa02649a0af2fe9b34eebed31eafbb924e56824cb934bcd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758067826645872220,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcf69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cebfaed2-8cab-4dd0-8ca6-089cfabdc70e,},Annotations:map[string]string{io.kubernet
es.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042dc70379de4d844d8f12b9393bba3b57b1e97c822197d216f26e93222192e7,PodSandboxId:d584c17a5066fec8aa5659b1cc0984e5255a92d28a3de956712588061283eee1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758067826579622903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893852c-182d-4f0b-adc7-6cf85183f756,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e634c3104080a0d045995534deb7ad9dd5b458da256e483fc7c789169af35b,PodSandboxId:d863933b38a2a02050c0571b956e843d5be5c9fed3aee41b1f6949670442f46b,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758067826604845276,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9wt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698b133d-2da9-43ed-b8ce-879f34603003,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8f3af172c0bd637e14426a9b5fdc5cbbc23deceab06225df5426aeb77e9a8f2,PodSandboxId:458f5d6c912fc37c345f30a78f1b973cbcee768e1b05c1ee58abd815bd241199,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Stat
e:CONTAINER_RUNNING,CreatedAt:1758067822986277578,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e4a7d9ac9b8e3e4d16e351fdab7f9d,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a20360f0aee1a4da4cc6a16e88ac133d2bde5bba76901442ce606920448fe8,PodSandboxId:7f86b9e327927f4d00ca5146249610f4944f164385c58455b205c46b8c67f48c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758067822789905827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 774929cd9e929910c78f51089f6ce784,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c82c77796405ca65fc28e9767c340158a833571d00f641613fd2d58bdd14c622,PodSandboxId:b56e2b3d2c075a31eb4bd7720121312b013752f62d57377ed86410cc9629b545,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758067822818600550,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c865c4ef5aadbdf17a9a89be8d577f,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68da90bb72167b6caafe3982615ce7f907bfb9f03790445580f0c541594277f9,PodSandboxId:d7eeac
2c9a0849cc4d6dd300bc8551c5a0219a2fe20cfce9fe44bad85e813e18,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758067822726239204,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08a5f4e563d186ce013dd3e014ba54,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:009cc8
f40ca65b9b9d1844c8cec12e3aa639d4fa6159eb2b8c08cdaf9e366923,PodSandboxId:f503e182fbce8ab3c9eb489d5c616758ff269cdbb35bfc55e4ff40183a286937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758067785712251954,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcf69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cebfaed2-8cab-4dd0-8ca6-089cfabdc70e,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a85c0699434bfc72e545b837eddb3394f35737b
3a9f90481867a606e119008,PodSandboxId:1e23269daf1c84e92ef48de5aa4f28fb4912cd5a1aa31dd416695ada1f606ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758067785735153587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893852c-182d-4f0b-adc7-6cf85183f756,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac1aca53e9bdc7d9a3710b2ccda65b1e62a52c7cbd842fc18aeeb
26b25c09ec,PodSandboxId:1bfab32a2850088645750281210cfc6b4b54dc75234593f672b57682ea2d33cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758067785707912229,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9wt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698b133d-2da9-43ed-b8ce-879f34603003,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"rea
diness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96fd4e7775ff38539b28775afa2831a2e91c348e198c6eff2c89008e7335c0b,PodSandboxId:4a8d05ebc9a83fe10215287312c3c3b0b90ea010413ffe9a8e88ac3e16f6cdcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758067782013953312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c865c4ef5aadbdf17a9a89be8d577f,},Annotations:map[string
]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c0cc6452fe55a4fc2cb24900ba6a1bc794c734b614e67a42c3f8228f2947ca3,PodSandboxId:c86c3dbf05534f7f2de19b239bb40870b1d98e0029133a337706a3c510caff82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758067781971731001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-fun
ctional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 774929cd9e929910c78f51089f6ce784,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa50634c06db9a61f344a7426395967b47f5fe13c1ca9ac1b2b48952c05614b5,PodSandboxId:379010ae968c7e55fc6fb81cec62140f4de438ac3704d3384b918ec846207ee0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758067781910668039,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08a5f4e563d186ce013dd3e014ba54,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14814620-36f6-4c11-9567-124a295c4196 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.948814034Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6bb0de8b-b48b-45dd-9350-961102eda07a name=/runtime.v1.RuntimeService/Version
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.948907477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6bb0de8b-b48b-45dd-9350-961102eda07a name=/runtime.v1.RuntimeService/Version
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.950706877Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=131638d3-8837-4342-b9bb-8136ade75ea7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.951677808Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758068221951648316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:222508,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=131638d3-8837-4342-b9bb-8136ade75ea7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.952270593Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2b9d39d-c618-4aad-abd4-02944db8e7fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.952343412Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2b9d39d-c618-4aad-abd4-02944db8e7fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.952803934Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:929f0819cfce05f107cc586915d914ba997c586fcdb094e4bba5c2ea7660752a,PodSandboxId:759cbccd052f2166835e9b1a257ad42cafa9c1605a331a89d3d82b04a5bae582,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758067927541338404,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6736e2e9-c999-4357-b65c-6e99190f152c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3fd165c189428df5aab57509f99623291749b001ed12741f449f2b7882a87c,PodSandboxId:ff755b577a7cee0fc8362c63706f53bb1783133965ff2b4e3be9929bdf14b48b,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1758067864926016845,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-b8f7n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75846762-7135-48fa-b2aa-8d1927545a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72295602e69fc60e984f6c9a6d585562117c81fcab649a8342e4fd679735d3,PodSandboxId:5eb45331e9d76f8fa626d2aba2c0e16d298041a32f582a7e0cfb14b7ba23559d,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1758067863341926715,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-fk8qm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d5aab6dc-8703-4af7-bd0b-093f75de9f53,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"na
me\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9f839ba278afe3c20abe1de686c7602b2a3be69877fbb9aa07bfe28a5c2d79,PodSandboxId:70a0d4f863edff493fa02649a0af2fe9b34eebed31eafbb924e56824cb934bcd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758067826645872220,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcf69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cebfaed2-8cab-4dd0-8ca6-089cfabdc70e,},Annotations:map[string]string{io.kubernet
es.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042dc70379de4d844d8f12b9393bba3b57b1e97c822197d216f26e93222192e7,PodSandboxId:d584c17a5066fec8aa5659b1cc0984e5255a92d28a3de956712588061283eee1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758067826579622903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893852c-182d-4f0b-adc7-6cf85183f756,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e634c3104080a0d045995534deb7ad9dd5b458da256e483fc7c789169af35b,PodSandboxId:d863933b38a2a02050c0571b956e843d5be5c9fed3aee41b1f6949670442f46b,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758067826604845276,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9wt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698b133d-2da9-43ed-b8ce-879f34603003,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8f3af172c0bd637e14426a9b5fdc5cbbc23deceab06225df5426aeb77e9a8f2,PodSandboxId:458f5d6c912fc37c345f30a78f1b973cbcee768e1b05c1ee58abd815bd241199,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Stat
e:CONTAINER_RUNNING,CreatedAt:1758067822986277578,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e4a7d9ac9b8e3e4d16e351fdab7f9d,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a20360f0aee1a4da4cc6a16e88ac133d2bde5bba76901442ce606920448fe8,PodSandboxId:7f86b9e327927f4d00ca5146249610f4944f164385c58455b205c46b8c67f48c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758067822789905827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 774929cd9e929910c78f51089f6ce784,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c82c77796405ca65fc28e9767c340158a833571d00f641613fd2d58bdd14c622,PodSandboxId:b56e2b3d2c075a31eb4bd7720121312b013752f62d57377ed86410cc9629b545,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758067822818600550,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c865c4ef5aadbdf17a9a89be8d577f,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68da90bb72167b6caafe3982615ce7f907bfb9f03790445580f0c541594277f9,PodSandboxId:d7eeac
2c9a0849cc4d6dd300bc8551c5a0219a2fe20cfce9fe44bad85e813e18,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758067822726239204,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08a5f4e563d186ce013dd3e014ba54,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:009cc8
f40ca65b9b9d1844c8cec12e3aa639d4fa6159eb2b8c08cdaf9e366923,PodSandboxId:f503e182fbce8ab3c9eb489d5c616758ff269cdbb35bfc55e4ff40183a286937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758067785712251954,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcf69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cebfaed2-8cab-4dd0-8ca6-089cfabdc70e,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a85c0699434bfc72e545b837eddb3394f35737b
3a9f90481867a606e119008,PodSandboxId:1e23269daf1c84e92ef48de5aa4f28fb4912cd5a1aa31dd416695ada1f606ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758067785735153587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893852c-182d-4f0b-adc7-6cf85183f756,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac1aca53e9bdc7d9a3710b2ccda65b1e62a52c7cbd842fc18aeeb
26b25c09ec,PodSandboxId:1bfab32a2850088645750281210cfc6b4b54dc75234593f672b57682ea2d33cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758067785707912229,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9wt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698b133d-2da9-43ed-b8ce-879f34603003,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"rea
diness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96fd4e7775ff38539b28775afa2831a2e91c348e198c6eff2c89008e7335c0b,PodSandboxId:4a8d05ebc9a83fe10215287312c3c3b0b90ea010413ffe9a8e88ac3e16f6cdcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758067782013953312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c865c4ef5aadbdf17a9a89be8d577f,},Annotations:map[string
]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c0cc6452fe55a4fc2cb24900ba6a1bc794c734b614e67a42c3f8228f2947ca3,PodSandboxId:c86c3dbf05534f7f2de19b239bb40870b1d98e0029133a337706a3c510caff82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758067781971731001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-fun
ctional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 774929cd9e929910c78f51089f6ce784,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa50634c06db9a61f344a7426395967b47f5fe13c1ca9ac1b2b48952c05614b5,PodSandboxId:379010ae968c7e55fc6fb81cec62140f4de438ac3704d3384b918ec846207ee0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758067781910668039,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08a5f4e563d186ce013dd3e014ba54,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2b9d39d-c618-4aad-abd4-02944db8e7fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.990429471Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b72d44f-a53d-4f7e-818a-9971634c0425 name=/runtime.v1.RuntimeService/Version
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.990816008Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b72d44f-a53d-4f7e-818a-9971634c0425 name=/runtime.v1.RuntimeService/Version
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.993016779Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2dde9261-ace0-4205-b72f-3b35b74b29b1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.993877240Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758068221993848246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:222508,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2dde9261-ace0-4205-b72f-3b35b74b29b1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.994743825Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63dc7ac4-d92e-4e5b-97cd-73f0b08ec17a name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.994910146Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63dc7ac4-d92e-4e5b-97cd-73f0b08ec17a name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:17:01 functional-456067 crio[5815]: time="2025-09-17 00:17:01.995465937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:929f0819cfce05f107cc586915d914ba997c586fcdb094e4bba5c2ea7660752a,PodSandboxId:759cbccd052f2166835e9b1a257ad42cafa9c1605a331a89d3d82b04a5bae582,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758067927541338404,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6736e2e9-c999-4357-b65c-6e99190f152c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3fd165c189428df5aab57509f99623291749b001ed12741f449f2b7882a87c,PodSandboxId:ff755b577a7cee0fc8362c63706f53bb1783133965ff2b4e3be9929bdf14b48b,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1758067864926016845,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-b8f7n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75846762-7135-48fa-b2aa-8d1927545a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72295602e69fc60e984f6c9a6d585562117c81fcab649a8342e4fd679735d3,PodSandboxId:5eb45331e9d76f8fa626d2aba2c0e16d298041a32f582a7e0cfb14b7ba23559d,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1758067863341926715,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-fk8qm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d5aab6dc-8703-4af7-bd0b-093f75de9f53,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"na
me\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9f839ba278afe3c20abe1de686c7602b2a3be69877fbb9aa07bfe28a5c2d79,PodSandboxId:70a0d4f863edff493fa02649a0af2fe9b34eebed31eafbb924e56824cb934bcd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758067826645872220,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcf69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cebfaed2-8cab-4dd0-8ca6-089cfabdc70e,},Annotations:map[string]string{io.kubernet
es.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042dc70379de4d844d8f12b9393bba3b57b1e97c822197d216f26e93222192e7,PodSandboxId:d584c17a5066fec8aa5659b1cc0984e5255a92d28a3de956712588061283eee1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758067826579622903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893852c-182d-4f0b-adc7-6cf85183f756,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e634c3104080a0d045995534deb7ad9dd5b458da256e483fc7c789169af35b,PodSandboxId:d863933b38a2a02050c0571b956e843d5be5c9fed3aee41b1f6949670442f46b,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758067826604845276,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9wt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698b133d-2da9-43ed-b8ce-879f34603003,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8f3af172c0bd637e14426a9b5fdc5cbbc23deceab06225df5426aeb77e9a8f2,PodSandboxId:458f5d6c912fc37c345f30a78f1b973cbcee768e1b05c1ee58abd815bd241199,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Stat
e:CONTAINER_RUNNING,CreatedAt:1758067822986277578,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e4a7d9ac9b8e3e4d16e351fdab7f9d,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a20360f0aee1a4da4cc6a16e88ac133d2bde5bba76901442ce606920448fe8,PodSandboxId:7f86b9e327927f4d00ca5146249610f4944f164385c58455b205c46b8c67f48c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758067822789905827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 774929cd9e929910c78f51089f6ce784,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c82c77796405ca65fc28e9767c340158a833571d00f641613fd2d58bdd14c622,PodSandboxId:b56e2b3d2c075a31eb4bd7720121312b013752f62d57377ed86410cc9629b545,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758067822818600550,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c865c4ef5aadbdf17a9a89be8d577f,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68da90bb72167b6caafe3982615ce7f907bfb9f03790445580f0c541594277f9,PodSandboxId:d7eeac
2c9a0849cc4d6dd300bc8551c5a0219a2fe20cfce9fe44bad85e813e18,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758067822726239204,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08a5f4e563d186ce013dd3e014ba54,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:009cc8
f40ca65b9b9d1844c8cec12e3aa639d4fa6159eb2b8c08cdaf9e366923,PodSandboxId:f503e182fbce8ab3c9eb489d5c616758ff269cdbb35bfc55e4ff40183a286937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758067785712251954,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcf69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cebfaed2-8cab-4dd0-8ca6-089cfabdc70e,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a85c0699434bfc72e545b837eddb3394f35737b
3a9f90481867a606e119008,PodSandboxId:1e23269daf1c84e92ef48de5aa4f28fb4912cd5a1aa31dd416695ada1f606ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758067785735153587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893852c-182d-4f0b-adc7-6cf85183f756,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac1aca53e9bdc7d9a3710b2ccda65b1e62a52c7cbd842fc18aeeb
26b25c09ec,PodSandboxId:1bfab32a2850088645750281210cfc6b4b54dc75234593f672b57682ea2d33cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758067785707912229,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9wt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698b133d-2da9-43ed-b8ce-879f34603003,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"rea
diness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96fd4e7775ff38539b28775afa2831a2e91c348e198c6eff2c89008e7335c0b,PodSandboxId:4a8d05ebc9a83fe10215287312c3c3b0b90ea010413ffe9a8e88ac3e16f6cdcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758067782013953312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c865c4ef5aadbdf17a9a89be8d577f,},Annotations:map[string
]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c0cc6452fe55a4fc2cb24900ba6a1bc794c734b614e67a42c3f8228f2947ca3,PodSandboxId:c86c3dbf05534f7f2de19b239bb40870b1d98e0029133a337706a3c510caff82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758067781971731001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-fun
ctional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 774929cd9e929910c78f51089f6ce784,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa50634c06db9a61f344a7426395967b47f5fe13c1ca9ac1b2b48952c05614b5,PodSandboxId:379010ae968c7e55fc6fb81cec62140f4de438ac3704d3384b918ec846207ee0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758067781910668039,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08a5f4e563d186ce013dd3e014ba54,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63dc7ac4-d92e-4e5b-97cd-73f0b08ec17a name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:17:02 functional-456067 crio[5815]: time="2025-09-17 00:17:02.039393763Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e25e25b2-7f40-40f5-9a49-5cd0ffe487af name=/runtime.v1.RuntimeService/Version
	Sep 17 00:17:02 functional-456067 crio[5815]: time="2025-09-17 00:17:02.039739150Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e25e25b2-7f40-40f5-9a49-5cd0ffe487af name=/runtime.v1.RuntimeService/Version
	Sep 17 00:17:02 functional-456067 crio[5815]: time="2025-09-17 00:17:02.042296564Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9eead9e1-a405-4465-9e08-f083da0e870d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:17:02 functional-456067 crio[5815]: time="2025-09-17 00:17:02.043164879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758068222043139973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:222508,},InodesUsed:&UInt64Value{Value:110,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9eead9e1-a405-4465-9e08-f083da0e870d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 00:17:02 functional-456067 crio[5815]: time="2025-09-17 00:17:02.043804814Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad728c9d-6906-48d9-9350-9f3cf5418fe8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:17:02 functional-456067 crio[5815]: time="2025-09-17 00:17:02.043882417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad728c9d-6906-48d9-9350-9f3cf5418fe8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 00:17:02 functional-456067 crio[5815]: time="2025-09-17 00:17:02.044237485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:929f0819cfce05f107cc586915d914ba997c586fcdb094e4bba5c2ea7660752a,PodSandboxId:759cbccd052f2166835e9b1a257ad42cafa9c1605a331a89d3d82b04a5bae582,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758067927541338404,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6736e2e9-c999-4357-b65c-6e99190f152c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3fd165c189428df5aab57509f99623291749b001ed12741f449f2b7882a87c,PodSandboxId:ff755b577a7cee0fc8362c63706f53bb1783133965ff2b4e3be9929bdf14b48b,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1758067864926016845,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-b8f7n,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 75846762-7135-48fa-b2aa-8d1927545a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea72295602e69fc60e984f6c9a6d585562117c81fcab649a8342e4fd679735d3,PodSandboxId:5eb45331e9d76f8fa626d2aba2c0e16d298041a32f582a7e0cfb14b7ba23559d,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1758067863341926715,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-fk8qm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d5aab6dc-8703-4af7-bd0b-093f75de9f53,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"na
me\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9f839ba278afe3c20abe1de686c7602b2a3be69877fbb9aa07bfe28a5c2d79,PodSandboxId:70a0d4f863edff493fa02649a0af2fe9b34eebed31eafbb924e56824cb934bcd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758067826645872220,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcf69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cebfaed2-8cab-4dd0-8ca6-089cfabdc70e,},Annotations:map[string]string{io.kubernet
es.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:042dc70379de4d844d8f12b9393bba3b57b1e97c822197d216f26e93222192e7,PodSandboxId:d584c17a5066fec8aa5659b1cc0984e5255a92d28a3de956712588061283eee1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758067826579622903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893852c-182d-4f0b-adc7-6cf85183f756,},Annotations:map[string]string{io.kubernetes.container.
hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e634c3104080a0d045995534deb7ad9dd5b458da256e483fc7c789169af35b,PodSandboxId:d863933b38a2a02050c0571b956e843d5be5c9fed3aee41b1f6949670442f46b,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758067826604845276,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9wt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698b133d-2da9-43ed-b8ce-879f34603003,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.con
tainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8f3af172c0bd637e14426a9b5fdc5cbbc23deceab06225df5426aeb77e9a8f2,PodSandboxId:458f5d6c912fc37c345f30a78f1b973cbcee768e1b05c1ee58abd815bd241199,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Stat
e:CONTAINER_RUNNING,CreatedAt:1758067822986277578,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2e4a7d9ac9b8e3e4d16e351fdab7f9d,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2a20360f0aee1a4da4cc6a16e88ac133d2bde5bba76901442ce606920448fe8,PodSandboxId:7f86b9e327927f4d00ca5146249610f4944f164385c58455b205c46b8c67f48c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{}
,UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758067822789905827,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 774929cd9e929910c78f51089f6ce784,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c82c77796405ca65fc28e9767c340158a833571d00f641613fd2d58bdd14c622,PodSandboxId:b56e2b3d2c075a31eb4bd7720121312b013752f62d57377ed86410cc9629b545,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758067822818600550,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c865c4ef5aadbdf17a9a89be8d577f,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68da90bb72167b6caafe3982615ce7f907bfb9f03790445580f0c541594277f9,PodSandboxId:d7eeac
2c9a0849cc4d6dd300bc8551c5a0219a2fe20cfce9fe44bad85e813e18,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758067822726239204,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08a5f4e563d186ce013dd3e014ba54,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:009cc8
f40ca65b9b9d1844c8cec12e3aa639d4fa6159eb2b8c08cdaf9e366923,PodSandboxId:f503e182fbce8ab3c9eb489d5c616758ff269cdbb35bfc55e4ff40183a286937,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758067785712251954,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pcf69,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cebfaed2-8cab-4dd0-8ca6-089cfabdc70e,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0a85c0699434bfc72e545b837eddb3394f35737b
3a9f90481867a606e119008,PodSandboxId:1e23269daf1c84e92ef48de5aa4f28fb4912cd5a1aa31dd416695ada1f606ad7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758067785735153587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893852c-182d-4f0b-adc7-6cf85183f756,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dac1aca53e9bdc7d9a3710b2ccda65b1e62a52c7cbd842fc18aeeb
26b25c09ec,PodSandboxId:1bfab32a2850088645750281210cfc6b4b54dc75234593f672b57682ea2d33cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758067785707912229,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9wt2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 698b133d-2da9-43ed-b8ce-879f34603003,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"rea
diness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f96fd4e7775ff38539b28775afa2831a2e91c348e198c6eff2c89008e7335c0b,PodSandboxId:4a8d05ebc9a83fe10215287312c3c3b0b90ea010413ffe9a8e88ac3e16f6cdcb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758067782013953312,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c865c4ef5aadbdf17a9a89be8d577f,},Annotations:map[string
]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c0cc6452fe55a4fc2cb24900ba6a1bc794c734b614e67a42c3f8228f2947ca3,PodSandboxId:c86c3dbf05534f7f2de19b239bb40870b1d98e0029133a337706a3c510caff82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758067781971731001,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-fun
ctional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 774929cd9e929910c78f51089f6ce784,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa50634c06db9a61f344a7426395967b47f5fe13c1ca9ac1b2b48952c05614b5,PodSandboxId:379010ae968c7e55fc6fb81cec62140f4de438ac3704d3384b918ec846207ee0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758067781910668039,Labels:map[st
ring]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-456067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f08a5f4e563d186ce013dd3e014ba54,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad728c9d-6906-48d9-9350-9f3cf5418fe8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	929f0819cfce0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e     4 minutes ago       Exited              mount-munger              0                   759cbccd052f2       busybox-mount
	6e3fd165c1894       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6   5 minutes ago       Running             echo-server               0                   ff755b577a7ce       hello-node-connect-7d85dfc575-b8f7n
	ea72295602e69       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb         5 minutes ago       Running             mysql                     0                   5eb45331e9d76       mysql-5bb876957f-fk8qm
	4a9f839ba278a       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                        6 minutes ago       Running             kube-proxy                3                   70a0d4f863edf       kube-proxy-pcf69
	c2e634c310408       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        6 minutes ago       Running             coredns                   3                   d863933b38a2a       coredns-66bc5c9577-z9wt2
	042dc70379de4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        6 minutes ago       Running             storage-provisioner       4                   d584c17a5066f       storage-provisioner
	c8f3af172c0bd       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                        6 minutes ago       Running             kube-apiserver            0                   458f5d6c912fc       kube-apiserver-functional-456067
	c82c77796405c       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                        6 minutes ago       Running             kube-scheduler            3                   b56e2b3d2c075       kube-scheduler-functional-456067
	c2a20360f0aee       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                        6 minutes ago       Running             kube-controller-manager   3                   7f86b9e327927       kube-controller-manager-functional-456067
	68da90bb72167       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        6 minutes ago       Running             etcd                      3                   d7eeac2c9a084       etcd-functional-456067
	d0a85c0699434       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                        7 minutes ago       Exited              storage-provisioner       3                   1e23269daf1c8       storage-provisioner
	009cc8f40ca65       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                        7 minutes ago       Exited              kube-proxy                2                   f503e182fbce8       kube-proxy-pcf69
	dac1aca53e9bd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                        7 minutes ago       Exited              coredns                   2                   1bfab32a28500       coredns-66bc5c9577-z9wt2
	f96fd4e7775ff       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                        7 minutes ago       Exited              kube-scheduler            2                   4a8d05ebc9a83       kube-scheduler-functional-456067
	4c0cc6452fe55       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                        7 minutes ago       Exited              kube-controller-manager   2                   c86c3dbf05534       kube-controller-manager-functional-456067
	aa50634c06db9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                        7 minutes ago       Exited              etcd                      2                   379010ae968c7       etcd-functional-456067
	
	
	==> coredns [c2e634c3104080a0d045995534deb7ad9dd5b458da256e483fc7c789169af35b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51773 - 29704 "HINFO IN 6836174930101226771.3542159233259123785. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.462094087s
	
	
	==> coredns [dac1aca53e9bdc7d9a3710b2ccda65b1e62a52c7cbd842fc18aeeb26b25c09ec] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60371 - 54073 "HINFO IN 8748564013944487444.4424170425081406606. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.117396738s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-456067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-456067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=functional-456067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_08_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:08:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-456067
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:16:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:12:27 +0000   Wed, 17 Sep 2025 00:08:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:12:27 +0000   Wed, 17 Sep 2025 00:08:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:12:27 +0000   Wed, 17 Sep 2025 00:08:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:12:27 +0000   Wed, 17 Sep 2025 00:08:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.44
	  Hostname:    functional-456067
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 58a0f112b18c4204bc54cae70ae412b4
	  System UUID:                58a0f112-b18c-4204-bc54-cae70ae412b4
	  Boot ID:                    ad612535-5cba-4164-9f8c-d12fdd7b5bac
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-fkpgc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  default                     hello-node-connect-7d85dfc575-b8f7n           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  default                     mysql-5bb876957f-fk8qm                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    6m12s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-z9wt2                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m33s
	  kube-system                 etcd-functional-456067                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m39s
	  kube-system                 kube-apiserver-functional-456067              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 kube-controller-manager-functional-456067     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-proxy-pcf69                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-scheduler-functional-456067              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-sgrhm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vxn8t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m31s                  kube-proxy       
	  Normal  Starting                 6m35s                  kube-proxy       
	  Normal  Starting                 7m16s                  kube-proxy       
	  Normal  Starting                 7m40s                  kube-proxy       
	  Normal  Starting                 8m47s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m46s (x8 over 8m47s)  kubelet          Node functional-456067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m46s (x7 over 8m47s)  kubelet          Node functional-456067 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  8m46s (x8 over 8m47s)  kubelet          Node functional-456067 status is now: NodeHasSufficientMemory
	  Normal  Starting                 8m39s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m39s                  kubelet          Node functional-456067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m39s                  kubelet          Node functional-456067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m39s                  kubelet          Node functional-456067 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m38s                  kubelet          Node functional-456067 status is now: NodeReady
	  Normal  RegisteredNode           8m34s                  node-controller  Node functional-456067 event: Registered Node functional-456067 in Controller
	  Normal  RegisteredNode           7m38s                  node-controller  Node functional-456067 event: Registered Node functional-456067 in Controller
	  Normal  NodeAllocatableEnforced  7m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m21s (x8 over 7m21s)  kubelet          Node functional-456067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m21s (x8 over 7m21s)  kubelet          Node functional-456067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m21s (x7 over 7m21s)  kubelet          Node functional-456067 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m21s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m14s                  node-controller  Node functional-456067 event: Registered Node functional-456067 in Controller
	  Normal  Starting                 6m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m40s (x8 over 6m40s)  kubelet          Node functional-456067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m40s (x8 over 6m40s)  kubelet          Node functional-456067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m40s (x7 over 6m40s)  kubelet          Node functional-456067 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m33s                  node-controller  Node functional-456067 event: Registered Node functional-456067 in Controller
	
	
	==> dmesg <==
	[  +0.002590] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Sep17 00:08] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000049] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.090099] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.102682] kauditd_printk_skb: 130 callbacks suppressed
	[  +0.157646] kauditd_printk_skb: 171 callbacks suppressed
	[  +6.339647] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.675252] kauditd_printk_skb: 252 callbacks suppressed
	[Sep17 00:09] kauditd_printk_skb: 38 callbacks suppressed
	[  +4.638331] kauditd_printk_skb: 328 callbacks suppressed
	[  +3.486901] kauditd_printk_skb: 3 callbacks suppressed
	[  +0.133926] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.144337] kauditd_printk_skb: 126 callbacks suppressed
	[  +8.158591] kauditd_printk_skb: 2 callbacks suppressed
	[Sep17 00:10] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.772756] kauditd_printk_skb: 273 callbacks suppressed
	[  +1.774509] kauditd_printk_skb: 119 callbacks suppressed
	[ +14.656307] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.993182] kauditd_printk_skb: 97 callbacks suppressed
	[Sep17 00:11] kauditd_printk_skb: 80 callbacks suppressed
	[  +0.000181] kauditd_printk_skb: 110 callbacks suppressed
	[Sep17 00:12] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.078844] crun[9816]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.661971] kauditd_printk_skb: 31 callbacks suppressed
	
	
	==> etcd [68da90bb72167b6caafe3982615ce7f907bfb9f03790445580f0c541594277f9] <==
	{"level":"info","ts":"2025-09-17T00:11:00.695445Z","caller":"traceutil/trace.go:172","msg":"trace[1510088569] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:832; }","duration":"286.07441ms","start":"2025-09-17T00:11:00.409365Z","end":"2025-09-17T00:11:00.695439Z","steps":["trace[1510088569] 'agreement among raft nodes before linearized reading'  (duration: 286.014137ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:11:00.695863Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"239.128174ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-09-17T00:11:00.695911Z","caller":"traceutil/trace.go:172","msg":"trace[1748402899] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:833; }","duration":"239.183754ms","start":"2025-09-17T00:11:00.456721Z","end":"2025-09-17T00:11:00.695905Z","steps":["trace[1748402899] 'agreement among raft nodes before linearized reading'  (duration: 239.073747ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:11:00.696031Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.421068ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:11:00.696045Z","caller":"traceutil/trace.go:172","msg":"trace[1693471884] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:833; }","duration":"138.438468ms","start":"2025-09-17T00:11:00.557602Z","end":"2025-09-17T00:11:00.696041Z","steps":["trace[1693471884] 'agreement among raft nodes before linearized reading'  (duration: 138.408273ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:11:00.696241Z","caller":"traceutil/trace.go:172","msg":"trace[446402970] transaction","detail":"{read_only:false; response_revision:833; number_of_response:1; }","duration":"313.533847ms","start":"2025-09-17T00:11:00.382455Z","end":"2025-09-17T00:11:00.695989Z","steps":["trace[446402970] 'process raft request'  (duration: 313.255426ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:11:00.697512Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:11:00.382435Z","time spent":"313.835605ms","remote":"127.0.0.1:55588","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1934,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/sp-pod\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/sp-pod\" value_size:1897 >> failure:<>"}
	{"level":"info","ts":"2025-09-17T00:11:02.791241Z","caller":"traceutil/trace.go:172","msg":"trace[1349327765] linearizableReadLoop","detail":"{readStateIndex:917; appliedIndex:917; }","duration":"233.544576ms","start":"2025-09-17T00:11:02.557682Z","end":"2025-09-17T00:11:02.791227Z","steps":["trace[1349327765] 'read index received'  (duration: 233.539322ms)","trace[1349327765] 'applied index is now lower than readState.Index'  (duration: 4.591µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:11:02.791363Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"233.687839ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:11:02.791383Z","caller":"traceutil/trace.go:172","msg":"trace[1556255295] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:837; }","duration":"233.724452ms","start":"2025-09-17T00:11:02.557652Z","end":"2025-09-17T00:11:02.791377Z","steps":["trace[1556255295] 'agreement among raft nodes before linearized reading'  (duration: 233.667633ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:11:02.977723Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.991027ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6201343802298446009 > lease_revoke:<id:560f99550211afd9>","response":"size:28"}
	{"level":"info","ts":"2025-09-17T00:11:02.977803Z","caller":"traceutil/trace.go:172","msg":"trace[1181892876] linearizableReadLoop","detail":"{readStateIndex:918; appliedIndex:917; }","duration":"186.498723ms","start":"2025-09-17T00:11:02.791295Z","end":"2025-09-17T00:11:02.977794Z","steps":["trace[1181892876] 'read index received'  (duration: 14.053µs)","trace[1181892876] 'applied index is now lower than readState.Index'  (duration: 186.483999ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:11:02.977855Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"263.342306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:11:02.977867Z","caller":"traceutil/trace.go:172","msg":"trace[664484668] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:837; }","duration":"263.362237ms","start":"2025-09-17T00:11:02.714500Z","end":"2025-09-17T00:11:02.977862Z","steps":["trace[664484668] 'agreement among raft nodes before linearized reading'  (duration: 263.320779ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:11:02.978172Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"241.493127ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-09-17T00:11:02.978212Z","caller":"traceutil/trace.go:172","msg":"trace[1958673148] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:837; }","duration":"241.53884ms","start":"2025-09-17T00:11:02.736667Z","end":"2025-09-17T00:11:02.978206Z","steps":["trace[1958673148] 'agreement among raft nodes before linearized reading'  (duration: 241.419533ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:11:09.383251Z","caller":"traceutil/trace.go:172","msg":"trace[1850063118] linearizableReadLoop","detail":"{readStateIndex:961; appliedIndex:961; }","duration":"110.602231ms","start":"2025-09-17T00:11:09.272632Z","end":"2025-09-17T00:11:09.383234Z","steps":["trace[1850063118] 'read index received'  (duration: 110.597648ms)","trace[1850063118] 'applied index is now lower than readState.Index'  (duration: 3.919µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-17T00:11:09.383406Z","caller":"traceutil/trace.go:172","msg":"trace[793459114] transaction","detail":"{read_only:false; response_revision:880; number_of_response:1; }","duration":"341.178896ms","start":"2025-09-17T00:11:09.042217Z","end":"2025-09-17T00:11:09.383396Z","steps":["trace[793459114] 'process raft request'  (duration: 341.07646ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:11:09.383993Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-17T00:11:09.042199Z","time spent":"341.593927ms","remote":"127.0.0.1:55552","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:877 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-09-17T00:11:09.385228Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.904897ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:11:09.385352Z","caller":"traceutil/trace.go:172","msg":"trace[863660152] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:880; }","duration":"112.716727ms","start":"2025-09-17T00:11:09.272627Z","end":"2025-09-17T00:11:09.385343Z","steps":["trace[863660152] 'agreement among raft nodes before linearized reading'  (duration: 110.909223ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:11:11.692694Z","caller":"traceutil/trace.go:172","msg":"trace[1075119690] transaction","detail":"{read_only:false; response_revision:881; number_of_response:1; }","duration":"279.973043ms","start":"2025-09-17T00:11:11.412708Z","end":"2025-09-17T00:11:11.692682Z","steps":["trace[1075119690] 'process raft request'  (duration: 279.88091ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:11:11.692929Z","caller":"traceutil/trace.go:172","msg":"trace[612386911] linearizableReadLoop","detail":"{readStateIndex:962; appliedIndex:963; }","duration":"135.153849ms","start":"2025-09-17T00:11:11.557768Z","end":"2025-09-17T00:11:11.692922Z","steps":["trace[612386911] 'read index received'  (duration: 135.151573ms)","trace[612386911] 'applied index is now lower than readState.Index'  (duration: 1.858µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:11:11.693006Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.2468ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:11:11.693052Z","caller":"traceutil/trace.go:172","msg":"trace[1381131886] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:881; }","duration":"135.295528ms","start":"2025-09-17T00:11:11.557743Z","end":"2025-09-17T00:11:11.693039Z","steps":["trace[1381131886] 'agreement among raft nodes before linearized reading'  (duration: 135.226863ms)"],"step_count":1}
	
	
	==> etcd [aa50634c06db9a61f344a7426395967b47f5fe13c1ca9ac1b2b48952c05614b5] <==
	{"level":"warn","ts":"2025-09-17T00:09:44.037616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:44.045030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:44.078425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:44.098759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:44.121749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:44.139035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:09:44.190735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50952","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:10:11.305133Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-17T00:10:11.305264Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-456067","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.44:2380"],"advertise-client-urls":["https://192.168.50.44:2379"]}
	{"level":"error","ts":"2025-09-17T00:10:11.305385Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:10:11.402959Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T00:10:11.403064Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:11.403088Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"bc0cc19ab3a6560f","current-leader-member-id":"bc0cc19ab3a6560f"}
	{"level":"info","ts":"2025-09-17T00:10:11.403184Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-17T00:10:11.403193Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-17T00:10:11.403331Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.44:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:10:11.403431Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.44:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:10:11.403438Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.44:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-17T00:10:11.403481Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T00:10:11.403487Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T00:10:11.403492Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:11.407290Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.44:2380"}
	{"level":"error","ts":"2025-09-17T00:10:11.407342Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.44:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T00:10:11.407365Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.44:2380"}
	{"level":"info","ts":"2025-09-17T00:10:11.407371Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-456067","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.44:2380"],"advertise-client-urls":["https://192.168.50.44:2379"]}
	
	
	==> kernel <==
	 00:17:02 up 9 min,  0 users,  load average: 0.14, 0.39, 0.28
	Linux functional-456067 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Sep  9 02:24:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [c8f3af172c0bd637e14426a9b5fdc5cbbc23deceab06225df5426aeb77e9a8f2] <==
	I0917 00:10:27.690885       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 00:10:29.356191       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0917 00:10:29.605142       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 00:10:29.704520       1 controller.go:667] quota admission added evaluator for: endpoints
	I0917 00:10:45.271376       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.125.7"}
	I0917 00:10:50.258242       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.110.172"}
	I0917 00:10:53.259901       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.160.219"}
	I0917 00:11:05.999938       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.240.195"}
	E0917 00:11:09.562004       1 conn.go:339] Error on socket receive: read tcp 192.168.50.44:8441->192.168.50.1:38920: use of closed network connection
	E0917 00:11:10.287837       1 conn.go:339] Error on socket receive: read tcp 192.168.50.44:8441->192.168.50.1:60234: use of closed network connection
	E0917 00:11:12.042476       1 conn.go:339] Error on socket receive: read tcp 192.168.50.44:8441->192.168.50.1:60246: use of closed network connection
	E0917 00:11:13.948158       1 conn.go:339] Error on socket receive: read tcp 192.168.50.44:8441->192.168.50.1:60302: use of closed network connection
	I0917 00:11:14.993124       1 controller.go:667] quota admission added evaluator for: namespaces
	I0917 00:11:15.382466       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.8.104"}
	I0917 00:11:15.412979       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.1.170"}
	I0917 00:11:27.244153       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:11:51.085595       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:12:41.435242       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:13:09.256735       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:14:06.495621       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:14:19.155645       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:15:10.478859       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:15:30.772670       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:16:32.672750       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:16:57.142641       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [4c0cc6452fe55a4fc2cb24900ba6a1bc794c734b614e67a42c3f8228f2947ca3] <==
	I0917 00:09:48.287652       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0917 00:09:48.287791       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0917 00:09:48.289497       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0917 00:09:48.289625       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 00:09:48.297255       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:09:48.297358       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0917 00:09:48.299593       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:09:48.301454       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 00:09:48.301522       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:09:48.301657       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0917 00:09:48.304284       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0917 00:09:48.305418       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0917 00:09:48.307627       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0917 00:09:48.309973       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0917 00:09:48.311112       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0917 00:09:48.311131       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0917 00:09:48.314353       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0917 00:09:48.317664       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:09:48.321909       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0917 00:09:48.324505       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0917 00:09:48.336441       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0917 00:09:48.337777       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0917 00:09:48.339021       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0917 00:09:48.340404       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0917 00:09:48.348324       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-controller-manager [c2a20360f0aee1a4da4cc6a16e88ac133d2bde5bba76901442ce606920448fe8] <==
	I0917 00:10:29.315849       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0917 00:10:29.317910       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0917 00:10:29.319068       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0917 00:10:29.321862       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0917 00:10:29.332192       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0917 00:10:29.333378       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:10:29.338749       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0917 00:10:29.345208       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 00:10:29.345270       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 00:10:29.345289       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 00:10:29.351142       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0917 00:10:29.351150       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0917 00:10:29.353671       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 00:10:29.353801       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 00:10:29.353891       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-456067"
	I0917 00:10:29.354141       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0917 00:10:29.354199       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	E0917 00:11:15.122641       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:11:15.124374       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:11:15.139263       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:11:15.145292       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:11:15.152494       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:11:15.154701       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:11:15.165120       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0917 00:11:15.173324       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [009cc8f40ca65b9b9d1844c8cec12e3aa639d4fa6159eb2b8c08cdaf9e366923] <==
	I0917 00:09:46.079350       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:09:46.179494       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:09:46.179577       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.44"]
	E0917 00:09:46.179643       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:09:46.238380       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0917 00:09:46.238530       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 00:09:46.238662       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:09:46.253782       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:09:46.254800       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:09:46.254835       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:09:46.260073       1 config.go:200] "Starting service config controller"
	I0917 00:09:46.260146       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:09:46.260174       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:09:46.260193       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:09:46.260234       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:09:46.260252       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:09:46.262321       1 config.go:309] "Starting node config controller"
	I0917 00:09:46.263485       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:09:46.265394       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:09:46.360370       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:09:46.360415       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 00:09:46.360434       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [4a9f839ba278afe3c20abe1de686c7602b2a3be69877fbb9aa07bfe28a5c2d79] <==
	I0917 00:10:27.103339       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:10:27.204343       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:10:27.204394       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.44"]
	E0917 00:10:27.204458       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:10:27.245359       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0917 00:10:27.245460       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 00:10:27.245646       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:10:27.256366       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:10:27.256760       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:10:27.256794       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:27.261975       1 config.go:309] "Starting node config controller"
	I0917 00:10:27.262007       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:10:27.262013       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:10:27.262158       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:10:27.262184       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:10:27.262279       1 config.go:200] "Starting service config controller"
	I0917 00:10:27.262283       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:10:27.262307       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:10:27.262329       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:10:27.362659       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:10:27.362738       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:10:27.362749       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c82c77796405ca65fc28e9767c340158a833571d00f641613fd2d58bdd14c622] <==
	I0917 00:10:23.915888       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:10:26.171795       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:10:26.171898       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:10:26.178503       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0917 00:10:26.178652       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0917 00:10:26.178716       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:26.178740       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:26.178762       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:26.178779       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:26.179239       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:10:26.179446       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:10:26.279024       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:26.279094       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0917 00:10:26.279194       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f96fd4e7775ff38539b28775afa2831a2e91c348e198c6eff2c89008e7335c0b] <==
	I0917 00:09:43.237162       1 serving.go:386] Generated self-signed cert in-memory
	I0917 00:09:45.029707       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 00:09:45.029746       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:09:45.035445       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0917 00:09:45.035524       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0917 00:09:45.035598       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:09:45.035605       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:09:45.035616       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:09:45.035622       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:09:45.035833       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:09:45.035904       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 00:09:45.135740       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:09:45.135844       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0917 00:09:45.135926       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:11.325428       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0917 00:10:11.325486       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0917 00:10:11.325516       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0917 00:10:11.341764       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:10:11.342692       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I0917 00:10:11.342725       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 00:10:11.342952       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0917 00:10:11.346457       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 00:16:12 functional-456067 kubelet[6532]: E0917 00:16:12.935725    6532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-fkpgc" podUID="a6cc2546-7037-4071-810f-d239693c428b"
	Sep 17 00:16:20 functional-456067 kubelet[6532]: E0917 00:16:20.305413    6532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bf17def4-4e0f-4ae8-a19c-3925f04e81e4"
	Sep 17 00:16:22 functional-456067 kubelet[6532]: E0917 00:16:22.390524    6532 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod774929cd9e929910c78f51089f6ce784/crio-c86c3dbf05534f7f2de19b239bb40870b1d98e0029133a337706a3c510caff82: Error finding container c86c3dbf05534f7f2de19b239bb40870b1d98e0029133a337706a3c510caff82: Status 404 returned error can't find the container with id c86c3dbf05534f7f2de19b239bb40870b1d98e0029133a337706a3c510caff82
	Sep 17 00:16:22 functional-456067 kubelet[6532]: E0917 00:16:22.391294    6532 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod9f08a5f4e563d186ce013dd3e014ba54/crio-379010ae968c7e55fc6fb81cec62140f4de438ac3704d3384b918ec846207ee0: Error finding container 379010ae968c7e55fc6fb81cec62140f4de438ac3704d3384b918ec846207ee0: Status 404 returned error can't find the container with id 379010ae968c7e55fc6fb81cec62140f4de438ac3704d3384b918ec846207ee0
	Sep 17 00:16:22 functional-456067 kubelet[6532]: E0917 00:16:22.391673    6532 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podcebfaed2-8cab-4dd0-8ca6-089cfabdc70e/crio-f503e182fbce8ab3c9eb489d5c616758ff269cdbb35bfc55e4ff40183a286937: Error finding container f503e182fbce8ab3c9eb489d5c616758ff269cdbb35bfc55e4ff40183a286937: Status 404 returned error can't find the container with id f503e182fbce8ab3c9eb489d5c616758ff269cdbb35bfc55e4ff40183a286937
	Sep 17 00:16:22 functional-456067 kubelet[6532]: E0917 00:16:22.392143    6532 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod2893852c-182d-4f0b-adc7-6cf85183f756/crio-1e23269daf1c84e92ef48de5aa4f28fb4912cd5a1aa31dd416695ada1f606ad7: Error finding container 1e23269daf1c84e92ef48de5aa4f28fb4912cd5a1aa31dd416695ada1f606ad7: Status 404 returned error can't find the container with id 1e23269daf1c84e92ef48de5aa4f28fb4912cd5a1aa31dd416695ada1f606ad7
	Sep 17 00:16:22 functional-456067 kubelet[6532]: E0917 00:16:22.392476    6532 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode5c865c4ef5aadbdf17a9a89be8d577f/crio-4a8d05ebc9a83fe10215287312c3c3b0b90ea010413ffe9a8e88ac3e16f6cdcb: Error finding container 4a8d05ebc9a83fe10215287312c3c3b0b90ea010413ffe9a8e88ac3e16f6cdcb: Status 404 returned error can't find the container with id 4a8d05ebc9a83fe10215287312c3c3b0b90ea010413ffe9a8e88ac3e16f6cdcb
	Sep 17 00:16:22 functional-456067 kubelet[6532]: E0917 00:16:22.392911    6532 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod698b133d-2da9-43ed-b8ce-879f34603003/crio-1bfab32a2850088645750281210cfc6b4b54dc75234593f672b57682ea2d33cf: Error finding container 1bfab32a2850088645750281210cfc6b4b54dc75234593f672b57682ea2d33cf: Status 404 returned error can't find the container with id 1bfab32a2850088645750281210cfc6b4b54dc75234593f672b57682ea2d33cf
	Sep 17 00:16:22 functional-456067 kubelet[6532]: E0917 00:16:22.594506    6532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068182594033963  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:16:22 functional-456067 kubelet[6532]: E0917 00:16:22.594594    6532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068182594033963  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:16:26 functional-456067 kubelet[6532]: E0917 00:16:26.304659    6532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-fkpgc" podUID="a6cc2546-7037-4071-810f-d239693c428b"
	Sep 17 00:16:32 functional-456067 kubelet[6532]: E0917 00:16:32.597179    6532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068192596433995  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:16:32 functional-456067 kubelet[6532]: E0917 00:16:32.597209    6532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068192596433995  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:16:41 functional-456067 kubelet[6532]: E0917 00:16:41.304932    6532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-fkpgc" podUID="a6cc2546-7037-4071-810f-d239693c428b"
	Sep 17 00:16:42 functional-456067 kubelet[6532]: E0917 00:16:42.599465    6532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068202599101943  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:16:42 functional-456067 kubelet[6532]: E0917 00:16:42.599516    6532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068202599101943  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:16:43 functional-456067 kubelet[6532]: E0917 00:16:43.605176    6532 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 17 00:16:43 functional-456067 kubelet[6532]: E0917 00:16:43.605225    6532 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 17 00:16:43 functional-456067 kubelet[6532]: E0917 00:16:43.605449    6532 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-sgrhm_kubernetes-dashboard(205e2252-3a6b-43c9-92f7-902aaf39be01): ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 17 00:16:43 functional-456067 kubelet[6532]: E0917 00:16:43.605633    6532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-sgrhm" podUID="205e2252-3a6b-43c9-92f7-902aaf39be01"
	Sep 17 00:16:52 functional-456067 kubelet[6532]: E0917 00:16:52.603758    6532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068212602827040  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:16:52 functional-456067 kubelet[6532]: E0917 00:16:52.603787    6532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068212602827040  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:16:57 functional-456067 kubelet[6532]: E0917 00:16:57.306299    6532 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-sgrhm" podUID="205e2252-3a6b-43c9-92f7-902aaf39be01"
	Sep 17 00:17:02 functional-456067 kubelet[6532]: E0917 00:17:02.606803    6532 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758068222606290974  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	Sep 17 00:17:02 functional-456067 kubelet[6532]: E0917 00:17:02.606877    6532 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758068222606290974  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:222508}  inodes_used:{value:110}}"
	
	
	==> storage-provisioner [042dc70379de4d844d8f12b9393bba3b57b1e97c822197d216f26e93222192e7] <==
	W0917 00:16:37.564249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:39.568059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:39.577976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:41.582147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:41.587795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:43.591195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:43.600350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:45.604821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:45.611965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:47.615264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:47.621043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:49.623846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:49.632945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:51.637485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:51.643157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:53.646763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:53.652218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:55.655866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:55.665333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:57.668949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:57.677587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:59.682094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:16:59.689116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:01.692618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:17:01.699512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d0a85c0699434bfc72e545b837eddb3394f35737b3a9f90481867a606e119008] <==
	I0917 00:09:45.969685       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 00:09:45.986739       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 00:09:45.988132       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0917 00:09:45.996023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:09:49.454293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:09:53.714364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:09:57.313195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:00.370163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:03.392392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:03.398053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0917 00:10:03.398172       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 00:10:03.398294       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-456067_099082b7-9f11-413f-b380-a5a3ba8127d6!
	I0917 00:10:03.398708       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b6c90970-c155-42ec-acaa-d206fa4df074", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-456067_099082b7-9f11-413f-b380-a5a3ba8127d6 became leader
	W0917 00:10:03.402654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:03.409950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0917 00:10:03.498462       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-456067_099082b7-9f11-413f-b380-a5a3ba8127d6!
	W0917 00:10:05.413592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:05.423123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:07.429339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:07.439998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:09.444422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:10:09.449783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-456067 -n functional-456067
helpers_test.go:269: (dbg) Run:  kubectl --context functional-456067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-fkpgc sp-pod dashboard-metrics-scraper-77bf4d6c4c-sgrhm kubernetes-dashboard-855c9754f9-vxn8t
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-456067 describe pod busybox-mount hello-node-75c85bcc94-fkpgc sp-pod dashboard-metrics-scraper-77bf4d6c4c-sgrhm kubernetes-dashboard-855c9754f9-vxn8t
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-456067 describe pod busybox-mount hello-node-75c85bcc94-fkpgc sp-pod dashboard-metrics-scraper-77bf4d6c4c-sgrhm kubernetes-dashboard-855c9754f9-vxn8t: exit status 1 (96.389946ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-456067/192.168.50.44
	Start Time:       Wed, 17 Sep 2025 00:11:14 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://929f0819cfce05f107cc586915d914ba997c586fcdb094e4bba5c2ea7660752a
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 17 Sep 2025 00:12:07 +0000
	      Finished:     Wed, 17 Sep 2025 00:12:07 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p2558 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-p2558:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m48s  default-scheduler  Successfully assigned default/busybox-mount to functional-456067
	  Normal  Pulling    5m48s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m56s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.272s (51.835s including waiting). Image size: 4631262 bytes.
	  Normal  Created    4m56s  kubelet            Created container: mount-munger
	  Normal  Started    4m56s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-fkpgc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-456067/192.168.50.44
	Start Time:       Wed, 17 Sep 2025 00:11:05 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m64wp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-m64wp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m57s                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-fkpgc to functional-456067
	  Warning  Failed     51s (x3 over 4m57s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     51s (x3 over 4m57s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    22s (x4 over 4m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     22s (x4 over 4m57s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    10s (x4 over 5m57s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-456067/192.168.50.44
	Start Time:       Wed, 17 Sep 2025 00:11:00 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cqjcm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-cqjcm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-456067
	  Warning  Failed     81s (x3 over 5m28s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     81s (x3 over 5m28s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    43s (x5 over 5m27s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     43s (x5 over 5m27s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    30s (x4 over 6m)     kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-sgrhm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vxn8t" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-456067 describe pod busybox-mount hello-node-75c85bcc94-fkpgc sp-pod dashboard-metrics-scraper-77bf4d6c4c-sgrhm kubernetes-dashboard-855c9754f9-vxn8t: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (371.62s)
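The PersistentVolumeClaim failure above is driven by sp-pod never leaving ImagePullBackOff (the docker.io/nginx pull hit the unauthenticated Docker Hub rate limit), not by the claim itself. A minimal manual check of the same condition, assuming the functional-456067 profile is still running, would be:

	kubectl --context functional-456067 get pod sp-pod -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'
	kubectl --context functional-456067 get events --field-selector involvedObject.name=sp-pod --sort-by=.lastTimestamp

The first command prints the container's waiting reason (ImagePullBackOff in this run); the second lists the pull-failure events in the order the kubelet emitted them.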

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-456067 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-456067 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-fkpgc" [a6cc2546-7037-4071-810f-d239693c428b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-456067 -n functional-456067
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-17 00:21:06.299939419 +0000 UTC m=+1378.051452858
functional_test.go:1460: (dbg) Run:  kubectl --context functional-456067 describe po hello-node-75c85bcc94-fkpgc -n default
functional_test.go:1460: (dbg) kubectl --context functional-456067 describe po hello-node-75c85bcc94-fkpgc -n default:
Name:             hello-node-75c85bcc94-fkpgc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-456067/192.168.50.44
Start Time:       Wed, 17 Sep 2025 00:11:05 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m64wp (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-m64wp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  10m                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-fkpgc to functional-456067
  Normal   Pulling    77s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     47s (x5 over 9m)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     47s (x5 over 9m)   kubelet            Error: ErrImagePull
  Normal   BackOff    6s (x13 over 9m)   kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     6s (x13 over 9m)   kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-456067 logs hello-node-75c85bcc94-fkpgc -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-456067 logs hello-node-75c85bcc94-fkpgc -n default: exit status 1 (75.704199ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-fkpgc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-456067 logs hello-node-75c85bcc94-fkpgc -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.59s)
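DeployApp creates the hello-node deployment directly from docker.io/kicbase/echo-server and then polls for a pod matching app=hello-node to become ready for 10m0s; the pull never succeeds because of the same Docker Hub rate limit. A roughly equivalent manual wait, assuming the same context and that the image can actually be pulled, would be:

	kubectl --context functional-456067 rollout status deployment/hello-node --timeout=600s
	kubectl --context functional-456067 wait --for=condition=Ready pod -l app=hello-node --timeout=600s

Either command exits non-zero at the deadline when, as in this run, no pod ever reaches Ready.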

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-456067 service --namespace=default --https --url hello-node: exit status 115 (312.294466ms)

                                                
                                                
-- stdout --
	https://192.168.50.44:30217
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-456067 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-456067 service hello-node --url --format={{.IP}}: exit status 115 (307.819452ms)

                                                
                                                
-- stdout --
	192.168.50.44
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-456067 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-456067 service hello-node --url: exit status 115 (307.564329ms)

                                                
                                                
-- stdout --
	http://192.168.50.44:30217
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-456067 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.50.44:30217
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.31s)
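The ServiceCmd HTTPS, Format, and URL subtests all fail the same way: minikube resolves the NodePort (30217) but exits with SVC_UNREACHABLE because no running pod backs the hello-node service. A quick way to confirm the service has no ready backends, assuming the standard kubernetes.io/service-name label applied by the EndpointSlice controller, is:

	kubectl --context functional-456067 get svc hello-node -o wide
	kubectl --context functional-456067 get endpointslices -l kubernetes.io/service-name=hello-node

With the backing pod stuck in ImagePullBackOff, the slice carries no ready addresses, which matches the SVC_UNREACHABLE exits above.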

                                                
                                    
x
+
TestPreload (163.49s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-959742 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
E0917 01:00:50.331096  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:01:53.406036  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-959742 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m37.315816137s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-959742 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-959742 image pull gcr.io/k8s-minikube/busybox: (1.721671248s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-959742
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-959742: (7.039989873s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-959742 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-959742 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (54.287369416s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-959742 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-09-17 01:03:25.491582906 +0000 UTC m=+3917.243096358
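The assertion at preload_test.go:75 checks that the busybox image pulled before the stop/start cycle is still present in the restarted profile's image list. A roughly equivalent manual check, assuming the test-preload-959742 profile still exists, would be:

	out/minikube-linux-amd64 -p test-preload-959742 image list | grep gcr.io/k8s-minikube/busybox

In this run the grep finds nothing, matching the image list shown above, where gcr.io/k8s-minikube/busybox is absent after the restart.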
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-959742 -n test-preload-959742
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-959742 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-959742 logs -n 25: (1.225156495s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-989933 ssh -n multinode-989933-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ ssh     │ multinode-989933 ssh -n multinode-989933 sudo cat /home/docker/cp-test_multinode-989933-m03_multinode-989933.txt                                                                    │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ cp      │ multinode-989933 cp multinode-989933-m03:/home/docker/cp-test.txt multinode-989933-m02:/home/docker/cp-test_multinode-989933-m03_multinode-989933-m02.txt                           │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ ssh     │ multinode-989933 ssh -n multinode-989933-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ ssh     │ multinode-989933 ssh -n multinode-989933-m02 sudo cat /home/docker/cp-test_multinode-989933-m03_multinode-989933-m02.txt                                                            │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ node    │ multinode-989933 node stop m03                                                                                                                                                      │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:49 UTC │
	│ node    │ multinode-989933 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 00:49 UTC │ 17 Sep 25 00:50 UTC │
	│ node    │ list -p multinode-989933                                                                                                                                                            │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │                     │
	│ stop    │ -p multinode-989933                                                                                                                                                                 │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 00:50 UTC │ 17 Sep 25 00:53 UTC │
	│ start   │ -p multinode-989933 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 00:53 UTC │ 17 Sep 25 00:55 UTC │
	│ node    │ list -p multinode-989933                                                                                                                                                            │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 00:55 UTC │                     │
	│ node    │ multinode-989933 node delete m03                                                                                                                                                    │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 00:55 UTC │ 17 Sep 25 00:55 UTC │
	│ stop    │ multinode-989933 stop                                                                                                                                                               │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 00:55 UTC │ 17 Sep 25 00:58 UTC │
	│ start   │ -p multinode-989933 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 00:58 UTC │ 17 Sep 25 00:59 UTC │
	│ node    │ list -p multinode-989933                                                                                                                                                            │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 01:00 UTC │                     │
	│ start   │ -p multinode-989933-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-989933-m02 │ jenkins │ v1.37.0 │ 17 Sep 25 01:00 UTC │                     │
	│ start   │ -p multinode-989933-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-989933-m03 │ jenkins │ v1.37.0 │ 17 Sep 25 01:00 UTC │ 17 Sep 25 01:00 UTC │
	│ node    │ add -p multinode-989933                                                                                                                                                             │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 01:00 UTC │                     │
	│ delete  │ -p multinode-989933-m03                                                                                                                                                             │ multinode-989933-m03 │ jenkins │ v1.37.0 │ 17 Sep 25 01:00 UTC │ 17 Sep 25 01:00 UTC │
	│ delete  │ -p multinode-989933                                                                                                                                                                 │ multinode-989933     │ jenkins │ v1.37.0 │ 17 Sep 25 01:00 UTC │ 17 Sep 25 01:00 UTC │
	│ start   │ -p test-preload-959742 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-959742  │ jenkins │ v1.37.0 │ 17 Sep 25 01:00 UTC │ 17 Sep 25 01:02 UTC │
	│ image   │ test-preload-959742 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-959742  │ jenkins │ v1.37.0 │ 17 Sep 25 01:02 UTC │ 17 Sep 25 01:02 UTC │
	│ stop    │ -p test-preload-959742                                                                                                                                                              │ test-preload-959742  │ jenkins │ v1.37.0 │ 17 Sep 25 01:02 UTC │ 17 Sep 25 01:02 UTC │
	│ start   │ -p test-preload-959742 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-959742  │ jenkins │ v1.37.0 │ 17 Sep 25 01:02 UTC │ 17 Sep 25 01:03 UTC │
	│ image   │ test-preload-959742 image list                                                                                                                                                      │ test-preload-959742  │ jenkins │ v1.37.0 │ 17 Sep 25 01:03 UTC │ 17 Sep 25 01:03 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 01:02:31
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 01:02:31.021379  178757 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:02:31.021499  178757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:02:31.021507  178757 out.go:374] Setting ErrFile to fd 2...
	I0917 01:02:31.021512  178757 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:02:31.021745  178757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
	I0917 01:02:31.022260  178757 out.go:368] Setting JSON to false
	I0917 01:02:31.023145  178757 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":13495,"bootTime":1758057456,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 01:02:31.023257  178757 start.go:140] virtualization: kvm guest
	I0917 01:02:31.025551  178757 out.go:179] * [test-preload-959742] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 01:02:31.027119  178757 notify.go:220] Checking for updates...
	I0917 01:02:31.027137  178757 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 01:02:31.028829  178757 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:02:31.030350  178757 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	I0917 01:02:31.031790  178757 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	I0917 01:02:31.033151  178757 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 01:02:31.034431  178757 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:02:31.036244  178757 config.go:182] Loaded profile config "test-preload-959742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0917 01:02:31.036669  178757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:02:31.036740  178757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:02:31.051481  178757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37567
	I0917 01:02:31.052039  178757 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:02:31.052703  178757 main.go:141] libmachine: Using API Version  1
	I0917 01:02:31.052743  178757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:02:31.053164  178757 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:02:31.053397  178757 main.go:141] libmachine: (test-preload-959742) Calling .DriverName
	I0917 01:02:31.055353  178757 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0917 01:02:31.056689  178757 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 01:02:31.057074  178757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:02:31.057117  178757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:02:31.070810  178757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45925
	I0917 01:02:31.071367  178757 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:02:31.071978  178757 main.go:141] libmachine: Using API Version  1
	I0917 01:02:31.072019  178757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:02:31.072414  178757 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:02:31.072633  178757 main.go:141] libmachine: (test-preload-959742) Calling .DriverName
	I0917 01:02:31.108395  178757 out.go:179] * Using the kvm2 driver based on existing profile
	I0917 01:02:31.110043  178757 start.go:304] selected driver: kvm2
	I0917 01:02:31.110065  178757 start.go:918] validating driver "kvm2" against &{Name:test-preload-959742 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-959742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.5 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:02:31.110169  178757 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:02:31.110995  178757 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:02:31.111079  178757 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21550-141589/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 01:02:31.126635  178757 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0917 01:02:31.127078  178757 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 01:02:31.127114  178757 cni.go:84] Creating CNI manager for ""
	I0917 01:02:31.127160  178757 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 01:02:31.127228  178757 start.go:348] cluster config:
	{Name:test-preload-959742 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-959742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.5 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:02:31.127332  178757 iso.go:125] acquiring lock: {Name:mkbc497934aeda3bf1eaa3e96176da91d2f10b30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:02:31.129686  178757 out.go:179] * Starting "test-preload-959742" primary control-plane node in "test-preload-959742" cluster
	I0917 01:02:31.131376  178757 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0917 01:02:31.169570  178757 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0917 01:02:31.169609  178757 cache.go:58] Caching tarball of preloaded images
	I0917 01:02:31.169782  178757 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0917 01:02:31.171722  178757 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I0917 01:02:31.173277  178757 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0917 01:02:31.211024  178757 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0917 01:02:34.408891  178757 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0917 01:02:34.408991  178757 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0917 01:02:35.166403  178757 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
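(The download above appends an md5 digest to the URL query, ?checksum=md5:2acdb4dde52794f2167c79dcee7507ae, and the two preload.go lines record the checksum being saved and verified. As an illustrative sketch only, not the verification code minikube itself runs, the same check can be reproduced by hand against the cached tarball path shown in the log:)
	tarball=/home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	expected=2acdb4dde52794f2167c79dcee7507ae
	# md5sum prints "<digest>  <path>"; keep only the digest for comparison
	actual=$(md5sum "$tarball" | awk '{print $1}')
	[ "$actual" = "$expected" ] && echo "preload checksum OK" || echo "checksum mismatch: got $actual, want $expected" >&2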
	I0917 01:02:35.166563  178757 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/test-preload-959742/config.json ...
	I0917 01:02:35.166829  178757 start.go:360] acquireMachinesLock for test-preload-959742: {Name:mk4898504d31cc722a10b1787754ef8ecd27d0ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 01:02:35.166937  178757 start.go:364] duration metric: took 66.989µs to acquireMachinesLock for "test-preload-959742"
	I0917 01:02:35.166957  178757 start.go:96] Skipping create...Using existing machine configuration
	I0917 01:02:35.166965  178757 fix.go:54] fixHost starting: 
	I0917 01:02:35.167307  178757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:02:35.167359  178757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:02:35.181377  178757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I0917 01:02:35.181927  178757 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:02:35.182400  178757 main.go:141] libmachine: Using API Version  1
	I0917 01:02:35.182425  178757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:02:35.182784  178757 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:02:35.183088  178757 main.go:141] libmachine: (test-preload-959742) Calling .DriverName
	I0917 01:02:35.183316  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetState
	I0917 01:02:35.185300  178757 fix.go:112] recreateIfNeeded on test-preload-959742: state=Stopped err=<nil>
	I0917 01:02:35.185328  178757 main.go:141] libmachine: (test-preload-959742) Calling .DriverName
	W0917 01:02:35.185537  178757 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 01:02:35.188197  178757 out.go:252] * Restarting existing kvm2 VM for "test-preload-959742" ...
	I0917 01:02:35.188230  178757 main.go:141] libmachine: (test-preload-959742) Calling .Start
	I0917 01:02:35.188405  178757 main.go:141] libmachine: (test-preload-959742) starting domain...
	I0917 01:02:35.188427  178757 main.go:141] libmachine: (test-preload-959742) ensuring networks are active...
	I0917 01:02:35.189341  178757 main.go:141] libmachine: (test-preload-959742) Ensuring network default is active
	I0917 01:02:35.189721  178757 main.go:141] libmachine: (test-preload-959742) Ensuring network mk-test-preload-959742 is active
	I0917 01:02:35.190151  178757 main.go:141] libmachine: (test-preload-959742) getting domain XML...
	I0917 01:02:35.191375  178757 main.go:141] libmachine: (test-preload-959742) DBG | starting domain XML:
	I0917 01:02:35.191406  178757 main.go:141] libmachine: (test-preload-959742) DBG | <domain type='kvm'>
	I0917 01:02:35.191418  178757 main.go:141] libmachine: (test-preload-959742) DBG |   <name>test-preload-959742</name>
	I0917 01:02:35.191427  178757 main.go:141] libmachine: (test-preload-959742) DBG |   <uuid>a51b9914-d87d-4f82-bc24-da123b1bba73</uuid>
	I0917 01:02:35.191436  178757 main.go:141] libmachine: (test-preload-959742) DBG |   <memory unit='KiB'>3145728</memory>
	I0917 01:02:35.191448  178757 main.go:141] libmachine: (test-preload-959742) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0917 01:02:35.191464  178757 main.go:141] libmachine: (test-preload-959742) DBG |   <vcpu placement='static'>2</vcpu>
	I0917 01:02:35.191475  178757 main.go:141] libmachine: (test-preload-959742) DBG |   <os>
	I0917 01:02:35.191486  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0917 01:02:35.191493  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <boot dev='cdrom'/>
	I0917 01:02:35.191503  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <boot dev='hd'/>
	I0917 01:02:35.191511  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <bootmenu enable='no'/>
	I0917 01:02:35.191534  178757 main.go:141] libmachine: (test-preload-959742) DBG |   </os>
	I0917 01:02:35.191548  178757 main.go:141] libmachine: (test-preload-959742) DBG |   <features>
	I0917 01:02:35.191555  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <acpi/>
	I0917 01:02:35.191560  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <apic/>
	I0917 01:02:35.191566  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <pae/>
	I0917 01:02:35.191570  178757 main.go:141] libmachine: (test-preload-959742) DBG |   </features>
	I0917 01:02:35.191582  178757 main.go:141] libmachine: (test-preload-959742) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0917 01:02:35.191588  178757 main.go:141] libmachine: (test-preload-959742) DBG |   <clock offset='utc'/>
	I0917 01:02:35.191594  178757 main.go:141] libmachine: (test-preload-959742) DBG |   <on_poweroff>destroy</on_poweroff>
	I0917 01:02:35.191598  178757 main.go:141] libmachine: (test-preload-959742) DBG |   <on_reboot>restart</on_reboot>
	I0917 01:02:35.191642  178757 main.go:141] libmachine: (test-preload-959742) DBG |   <on_crash>destroy</on_crash>
	I0917 01:02:35.191663  178757 main.go:141] libmachine: (test-preload-959742) DBG |   <devices>
	I0917 01:02:35.191677  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0917 01:02:35.191697  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <disk type='file' device='cdrom'>
	I0917 01:02:35.191712  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <driver name='qemu' type='raw'/>
	I0917 01:02:35.191725  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <source file='/home/jenkins/minikube-integration/21550-141589/.minikube/machines/test-preload-959742/boot2docker.iso'/>
	I0917 01:02:35.191770  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <target dev='hdc' bus='scsi'/>
	I0917 01:02:35.191798  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <readonly/>
	I0917 01:02:35.191812  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0917 01:02:35.191820  178757 main.go:141] libmachine: (test-preload-959742) DBG |     </disk>
	I0917 01:02:35.191827  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <disk type='file' device='disk'>
	I0917 01:02:35.191834  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0917 01:02:35.191846  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <source file='/home/jenkins/minikube-integration/21550-141589/.minikube/machines/test-preload-959742/test-preload-959742.rawdisk'/>
	I0917 01:02:35.191868  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <target dev='hda' bus='virtio'/>
	I0917 01:02:35.191879  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0917 01:02:35.191892  178757 main.go:141] libmachine: (test-preload-959742) DBG |     </disk>
	I0917 01:02:35.191904  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0917 01:02:35.191918  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0917 01:02:35.191925  178757 main.go:141] libmachine: (test-preload-959742) DBG |     </controller>
	I0917 01:02:35.191931  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0917 01:02:35.191938  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0917 01:02:35.191953  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0917 01:02:35.191966  178757 main.go:141] libmachine: (test-preload-959742) DBG |     </controller>
	I0917 01:02:35.191974  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <interface type='network'>
	I0917 01:02:35.191987  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <mac address='52:54:00:e9:1c:f0'/>
	I0917 01:02:35.191998  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <source network='mk-test-preload-959742'/>
	I0917 01:02:35.192010  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <model type='virtio'/>
	I0917 01:02:35.192031  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0917 01:02:35.192044  178757 main.go:141] libmachine: (test-preload-959742) DBG |     </interface>
	I0917 01:02:35.192055  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <interface type='network'>
	I0917 01:02:35.192065  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <mac address='52:54:00:8d:16:6d'/>
	I0917 01:02:35.192076  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <source network='default'/>
	I0917 01:02:35.192090  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <model type='virtio'/>
	I0917 01:02:35.192105  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0917 01:02:35.192117  178757 main.go:141] libmachine: (test-preload-959742) DBG |     </interface>
	I0917 01:02:35.192129  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <serial type='pty'>
	I0917 01:02:35.192142  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <target type='isa-serial' port='0'>
	I0917 01:02:35.192150  178757 main.go:141] libmachine: (test-preload-959742) DBG |         <model name='isa-serial'/>
	I0917 01:02:35.192177  178757 main.go:141] libmachine: (test-preload-959742) DBG |       </target>
	I0917 01:02:35.192186  178757 main.go:141] libmachine: (test-preload-959742) DBG |     </serial>
	I0917 01:02:35.192203  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <console type='pty'>
	I0917 01:02:35.192216  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <target type='serial' port='0'/>
	I0917 01:02:35.192227  178757 main.go:141] libmachine: (test-preload-959742) DBG |     </console>
	I0917 01:02:35.192236  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <input type='mouse' bus='ps2'/>
	I0917 01:02:35.192251  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <input type='keyboard' bus='ps2'/>
	I0917 01:02:35.192263  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <audio id='1' type='none'/>
	I0917 01:02:35.192271  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <memballoon model='virtio'>
	I0917 01:02:35.192277  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0917 01:02:35.192284  178757 main.go:141] libmachine: (test-preload-959742) DBG |     </memballoon>
	I0917 01:02:35.192292  178757 main.go:141] libmachine: (test-preload-959742) DBG |     <rng model='virtio'>
	I0917 01:02:35.192309  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <backend model='random'>/dev/random</backend>
	I0917 01:02:35.192335  178757 main.go:141] libmachine: (test-preload-959742) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0917 01:02:35.192353  178757 main.go:141] libmachine: (test-preload-959742) DBG |     </rng>
	I0917 01:02:35.192366  178757 main.go:141] libmachine: (test-preload-959742) DBG |   </devices>
	I0917 01:02:35.192379  178757 main.go:141] libmachine: (test-preload-959742) DBG | </domain>
	I0917 01:02:35.192394  178757 main.go:141] libmachine: (test-preload-959742) DBG | 
	I0917 01:02:36.495947  178757 main.go:141] libmachine: (test-preload-959742) waiting for domain to start...
	I0917 01:02:36.497413  178757 main.go:141] libmachine: (test-preload-959742) domain is now running
	I0917 01:02:36.497435  178757 main.go:141] libmachine: (test-preload-959742) waiting for IP...
	I0917 01:02:36.498261  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:36.499058  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has current primary IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:36.499083  178757 main.go:141] libmachine: (test-preload-959742) found domain IP: 192.168.50.5
	I0917 01:02:36.499094  178757 main.go:141] libmachine: (test-preload-959742) reserving static IP address...
	I0917 01:02:36.499639  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "test-preload-959742", mac: "52:54:00:e9:1c:f0", ip: "192.168.50.5"} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:01:00 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:36.499675  178757 main.go:141] libmachine: (test-preload-959742) DBG | skip adding static IP to network mk-test-preload-959742 - found existing host DHCP lease matching {name: "test-preload-959742", mac: "52:54:00:e9:1c:f0", ip: "192.168.50.5"}
	I0917 01:02:36.499702  178757 main.go:141] libmachine: (test-preload-959742) reserved static IP address 192.168.50.5 for domain test-preload-959742
	I0917 01:02:36.499721  178757 main.go:141] libmachine: (test-preload-959742) DBG | Getting to WaitForSSH function...
	I0917 01:02:36.499736  178757 main.go:141] libmachine: (test-preload-959742) waiting for SSH...
	I0917 01:02:36.502181  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:36.502755  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:01:00 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:36.502783  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:36.503006  178757 main.go:141] libmachine: (test-preload-959742) DBG | Using SSH client type: external
	I0917 01:02:36.503032  178757 main.go:141] libmachine: (test-preload-959742) DBG | Using SSH private key: /home/jenkins/minikube-integration/21550-141589/.minikube/machines/test-preload-959742/id_rsa (-rw-------)
	I0917 01:02:36.503064  178757 main.go:141] libmachine: (test-preload-959742) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21550-141589/.minikube/machines/test-preload-959742/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 01:02:36.503074  178757 main.go:141] libmachine: (test-preload-959742) DBG | About to run SSH command:
	I0917 01:02:36.503093  178757 main.go:141] libmachine: (test-preload-959742) DBG | exit 0
	I0917 01:02:47.755729  178757 main.go:141] libmachine: (test-preload-959742) DBG | SSH cmd err, output: exit status 255: 
	I0917 01:02:47.755760  178757 main.go:141] libmachine: (test-preload-959742) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0917 01:02:47.755770  178757 main.go:141] libmachine: (test-preload-959742) DBG | command : exit 0
	I0917 01:02:47.755776  178757 main.go:141] libmachine: (test-preload-959742) DBG | err     : exit status 255
	I0917 01:02:47.755793  178757 main.go:141] libmachine: (test-preload-959742) DBG | output  : 
	I0917 01:02:50.756590  178757 main.go:141] libmachine: (test-preload-959742) DBG | Getting to WaitForSSH function...
	I0917 01:02:50.759923  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:50.760391  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:50.760432  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:50.760522  178757 main.go:141] libmachine: (test-preload-959742) DBG | Using SSH client type: external
	I0917 01:02:50.760577  178757 main.go:141] libmachine: (test-preload-959742) DBG | Using SSH private key: /home/jenkins/minikube-integration/21550-141589/.minikube/machines/test-preload-959742/id_rsa (-rw-------)
	I0917 01:02:50.760601  178757 main.go:141] libmachine: (test-preload-959742) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21550-141589/.minikube/machines/test-preload-959742/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 01:02:50.760643  178757 main.go:141] libmachine: (test-preload-959742) DBG | About to run SSH command:
	I0917 01:02:50.760658  178757 main.go:141] libmachine: (test-preload-959742) DBG | exit 0
	I0917 01:02:50.892130  178757 main.go:141] libmachine: (test-preload-959742) DBG | SSH cmd err, output: <nil>: 
	I0917 01:02:50.892658  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetConfigRaw
	I0917 01:02:50.893428  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetIP
	I0917 01:02:50.896529  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:50.896869  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:50.896903  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:50.897216  178757 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/test-preload-959742/config.json ...
	I0917 01:02:50.897469  178757 machine.go:93] provisionDockerMachine start ...
	I0917 01:02:50.897490  178757 main.go:141] libmachine: (test-preload-959742) Calling .DriverName
	I0917 01:02:50.897747  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHHostname
	I0917 01:02:50.900545  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:50.901032  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:50.901060  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:50.901272  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHPort
	I0917 01:02:50.901474  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHKeyPath
	I0917 01:02:50.901632  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHKeyPath
	I0917 01:02:50.901803  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHUsername
	I0917 01:02:50.901970  178757 main.go:141] libmachine: Using SSH client type: native
	I0917 01:02:50.902311  178757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0917 01:02:50.902327  178757 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 01:02:51.011981  178757 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0917 01:02:51.012016  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetMachineName
	I0917 01:02:51.012321  178757 buildroot.go:166] provisioning hostname "test-preload-959742"
	I0917 01:02:51.012352  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetMachineName
	I0917 01:02:51.012612  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHHostname
	I0917 01:02:51.015821  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:51.016188  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:51.016230  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:51.016522  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHPort
	I0917 01:02:51.016751  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHKeyPath
	I0917 01:02:51.016959  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHKeyPath
	I0917 01:02:51.017101  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHUsername
	I0917 01:02:51.017273  178757 main.go:141] libmachine: Using SSH client type: native
	I0917 01:02:51.017476  178757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0917 01:02:51.017488  178757 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-959742 && echo "test-preload-959742" | sudo tee /etc/hostname
	I0917 01:02:51.143346  178757 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-959742
	
	I0917 01:02:51.143376  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHHostname
	I0917 01:02:51.146769  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:51.147100  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:51.147146  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:51.147302  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHPort
	I0917 01:02:51.147564  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHKeyPath
	I0917 01:02:51.147751  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHKeyPath
	I0917 01:02:51.148018  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHUsername
	I0917 01:02:51.148230  178757 main.go:141] libmachine: Using SSH client type: native
	I0917 01:02:51.148443  178757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0917 01:02:51.148466  178757 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-959742' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-959742/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-959742' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 01:02:51.268045  178757 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 01:02:51.268079  178757 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21550-141589/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-141589/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-141589/.minikube}
	I0917 01:02:51.268112  178757 buildroot.go:174] setting up certificates
	I0917 01:02:51.268147  178757 provision.go:84] configureAuth start
	I0917 01:02:51.268169  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetMachineName
	I0917 01:02:51.268499  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetIP
	I0917 01:02:51.271829  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:51.272256  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:51.272282  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:51.272472  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHHostname
	I0917 01:02:51.274940  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:51.275287  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:51.275330  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:51.275445  178757 provision.go:143] copyHostCerts
	I0917 01:02:51.275509  178757 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-141589/.minikube/ca.pem, removing ...
	I0917 01:02:51.275528  178757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-141589/.minikube/ca.pem
	I0917 01:02:51.275611  178757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-141589/.minikube/ca.pem (1078 bytes)
	I0917 01:02:51.275741  178757 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-141589/.minikube/cert.pem, removing ...
	I0917 01:02:51.275752  178757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-141589/.minikube/cert.pem
	I0917 01:02:51.275785  178757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-141589/.minikube/cert.pem (1123 bytes)
	I0917 01:02:51.275847  178757 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-141589/.minikube/key.pem, removing ...
	I0917 01:02:51.275869  178757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-141589/.minikube/key.pem
	I0917 01:02:51.275904  178757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-141589/.minikube/key.pem (1675 bytes)
	I0917 01:02:51.275968  178757 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-141589/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca-key.pem org=jenkins.test-preload-959742 san=[127.0.0.1 192.168.50.5 localhost minikube test-preload-959742]
	I0917 01:02:51.587355  178757 provision.go:177] copyRemoteCerts
	I0917 01:02:51.587435  178757 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 01:02:51.587463  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHHostname
	I0917 01:02:51.590846  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:51.591274  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:51.591309  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:51.591518  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHPort
	I0917 01:02:51.591749  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHKeyPath
	I0917 01:02:51.591928  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHUsername
	I0917 01:02:51.592062  178757 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/test-preload-959742/id_rsa Username:docker}
	I0917 01:02:51.677343  178757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 01:02:51.710557  178757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0917 01:02:51.741900  178757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 01:02:51.779202  178757 provision.go:87] duration metric: took 511.031485ms to configureAuth
	I0917 01:02:51.779237  178757 buildroot.go:189] setting minikube options for container-runtime
	I0917 01:02:51.779438  178757 config.go:182] Loaded profile config "test-preload-959742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0917 01:02:51.779552  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHHostname
	I0917 01:02:51.782887  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:51.783296  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:51.783324  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:51.783550  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHPort
	I0917 01:02:51.783761  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHKeyPath
	I0917 01:02:51.783966  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHKeyPath
	I0917 01:02:51.784129  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHUsername
	I0917 01:02:51.784290  178757 main.go:141] libmachine: Using SSH client type: native
	I0917 01:02:51.784556  178757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0917 01:02:51.784578  178757 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 01:02:52.043310  178757 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 01:02:52.043348  178757 machine.go:96] duration metric: took 1.145864609s to provisionDockerMachine
	I0917 01:02:52.043365  178757 start.go:293] postStartSetup for "test-preload-959742" (driver="kvm2")
	I0917 01:02:52.043379  178757 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 01:02:52.043432  178757 main.go:141] libmachine: (test-preload-959742) Calling .DriverName
	I0917 01:02:52.043878  178757 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 01:02:52.043912  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHHostname
	I0917 01:02:52.047491  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:52.047906  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:52.047945  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:52.048178  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHPort
	I0917 01:02:52.048426  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHKeyPath
	I0917 01:02:52.048636  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHUsername
	I0917 01:02:52.048840  178757 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/test-preload-959742/id_rsa Username:docker}
	I0917 01:02:52.135425  178757 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 01:02:52.140642  178757 info.go:137] Remote host: Buildroot 2025.02
	I0917 01:02:52.140677  178757 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-141589/.minikube/addons for local assets ...
	I0917 01:02:52.140785  178757 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-141589/.minikube/files for local assets ...
	I0917 01:02:52.140913  178757 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-141589/.minikube/files/etc/ssl/certs/1455302.pem -> 1455302.pem in /etc/ssl/certs
	I0917 01:02:52.141067  178757 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 01:02:52.153826  178757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/files/etc/ssl/certs/1455302.pem --> /etc/ssl/certs/1455302.pem (1708 bytes)
	I0917 01:02:52.185769  178757 start.go:296] duration metric: took 142.385242ms for postStartSetup
	I0917 01:02:52.185819  178757 fix.go:56] duration metric: took 17.018854105s for fixHost
	I0917 01:02:52.185845  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHHostname
	I0917 01:02:52.188976  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:52.189398  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:52.189427  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:52.189623  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHPort
	I0917 01:02:52.189868  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHKeyPath
	I0917 01:02:52.190060  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHKeyPath
	I0917 01:02:52.190265  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHUsername
	I0917 01:02:52.190564  178757 main.go:141] libmachine: Using SSH client type: native
	I0917 01:02:52.190782  178757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0917 01:02:52.190794  178757 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 01:02:52.299943  178757 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758070972.261348099
	
	I0917 01:02:52.299974  178757 fix.go:216] guest clock: 1758070972.261348099
	I0917 01:02:52.299982  178757 fix.go:229] Guest: 2025-09-17 01:02:52.261348099 +0000 UTC Remote: 2025-09-17 01:02:52.185823988 +0000 UTC m=+21.203275935 (delta=75.524111ms)
	I0917 01:02:52.300008  178757 fix.go:200] guest clock delta is within tolerance: 75.524111ms
	I0917 01:02:52.300035  178757 start.go:83] releasing machines lock for "test-preload-959742", held for 17.133084616s
	I0917 01:02:52.300065  178757 main.go:141] libmachine: (test-preload-959742) Calling .DriverName
	I0917 01:02:52.300361  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetIP
	I0917 01:02:52.303684  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:52.304089  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:52.304121  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:52.304257  178757 main.go:141] libmachine: (test-preload-959742) Calling .DriverName
	I0917 01:02:52.304943  178757 main.go:141] libmachine: (test-preload-959742) Calling .DriverName
	I0917 01:02:52.305177  178757 main.go:141] libmachine: (test-preload-959742) Calling .DriverName
	I0917 01:02:52.305308  178757 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 01:02:52.305369  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHHostname
	I0917 01:02:52.305415  178757 ssh_runner.go:195] Run: cat /version.json
	I0917 01:02:52.305437  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHHostname
	I0917 01:02:52.308565  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:52.308601  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:52.309102  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:52.309140  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:52.309164  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:52.309193  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:52.309403  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHPort
	I0917 01:02:52.309553  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHPort
	I0917 01:02:52.309645  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHKeyPath
	I0917 01:02:52.309755  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHKeyPath
	I0917 01:02:52.309801  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHUsername
	I0917 01:02:52.309944  178757 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/test-preload-959742/id_rsa Username:docker}
	I0917 01:02:52.310000  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHUsername
	I0917 01:02:52.310177  178757 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/test-preload-959742/id_rsa Username:docker}
	I0917 01:02:52.422997  178757 ssh_runner.go:195] Run: systemctl --version
	I0917 01:02:52.429793  178757 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 01:02:52.580386  178757 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 01:02:52.587722  178757 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 01:02:52.587818  178757 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:02:52.609289  178757 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
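The find/-exec mv command above renames any bridge or podman CNI configs so they do not conflict with the CNI minikube will install itself. A rough Go equivalent of that rename-to-disable step, assuming the /etc/cni/net.d directory and the .mk_disabled suffix taken from the log (running it for real requires root):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs in dir by appending
// ".mk_disabled", mirroring the find/-exec mv invocation in the log above.
func disableConflictingCNI(dir string) ([]string, error) {
	var disabled []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		for _, path := range matches {
			if strings.HasSuffix(path, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(path, path+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, path)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("disabled %v bridge/podman cni config(s)\n", disabled)
}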
	I0917 01:02:52.609322  178757 start.go:495] detecting cgroup driver to use...
	I0917 01:02:52.609402  178757 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 01:02:52.629400  178757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 01:02:52.647285  178757 docker.go:218] disabling cri-docker service (if available) ...
	I0917 01:02:52.647354  178757 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 01:02:52.666023  178757 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 01:02:52.683598  178757 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 01:02:52.832177  178757 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 01:02:53.053119  178757 docker.go:234] disabling docker service ...
	I0917 01:02:53.053198  178757 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 01:02:53.071933  178757 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 01:02:53.089509  178757 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 01:02:53.248454  178757 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 01:02:53.396250  178757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 01:02:53.418357  178757 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 01:02:53.443035  178757 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0917 01:02:53.443121  178757 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:02:53.456597  178757 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 01:02:53.456682  178757 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:02:53.470326  178757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:02:53.483434  178757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:02:53.496763  178757 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 01:02:53.510814  178757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:02:53.523944  178757 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:02:53.545841  178757 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
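The sed invocations above pin the pause image, switch cri-o to the cgroupfs cgroup manager, and re-add conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf. A small Go sketch of the same substitutions applied to an in-memory copy of the file; the default_sysctls edit is omitted for brevity and the sample input below is invented:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same substitutions the sed commands in the log
// perform, but on an in-memory copy of 02-crio.conf.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then pin it to "pod" right after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	sample := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	fmt.Print(rewriteCrioConf(sample))
}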
	I0917 01:02:53.558870  178757 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 01:02:53.570005  178757 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0917 01:02:53.570108  178757 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0917 01:02:53.590873  178757 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
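Above, the missing bridge-nf sysctl is treated as a hint that br_netfilter is not loaded: minikube runs modprobe and then enables IPv4 forwarding. A hedged Go sketch of that fallback, using the same paths shown in the log (it needs root to actually succeed):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence in the log: if the bridge-nf
// sysctl is missing, load br_netfilter, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// sysctl file not present yet; loading the module creates it
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter ready, ip_forward enabled")
}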
	I0917 01:02:53.603549  178757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:02:53.745645  178757 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 01:02:53.881773  178757 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 01:02:53.881890  178757 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 01:02:53.887955  178757 start.go:563] Will wait 60s for crictl version
	I0917 01:02:53.888020  178757 ssh_runner.go:195] Run: which crictl
	I0917 01:02:53.892608  178757 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:02:53.939449  178757 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 01:02:53.939537  178757 ssh_runner.go:195] Run: crio --version
	I0917 01:02:53.971561  178757 ssh_runner.go:195] Run: crio --version
	I0917 01:02:54.003833  178757 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0917 01:02:54.005362  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetIP
	I0917 01:02:54.008308  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:54.008683  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:02:54.008715  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:02:54.008988  178757 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0917 01:02:54.014238  178757 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
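The grep -v / echo pipeline above replaces any stale host.minikube.internal line in /etc/hosts and appends the current mapping. A self-contained Go sketch of that upsert on an in-memory hosts file (the sample content is invented):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry mimics the grep -v / echo pipeline in the log: drop any
// existing line for the given hostname, then append a fresh "<ip>\t<host>" entry.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.50.99\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.50.1", "host.minikube.internal"))
}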
	I0917 01:02:54.032625  178757 kubeadm.go:875] updating cluster {Name:test-preload-959742 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-959742 Namespace:defa
ult APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.5 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 01:02:54.032770  178757 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0917 01:02:54.032818  178757 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:02:54.074391  178757 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0917 01:02:54.074472  178757 ssh_runner.go:195] Run: which lz4
	I0917 01:02:54.079234  178757 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 01:02:54.084579  178757 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 01:02:54.084613  178757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0917 01:02:55.727720  178757 crio.go:462] duration metric: took 1.648513728s to copy over tarball
	I0917 01:02:55.727797  178757 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 01:02:57.482010  178757 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.754187117s)
	I0917 01:02:57.482038  178757 crio.go:469] duration metric: took 1.7542875s to extract the tarball
	I0917 01:02:57.482047  178757 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 01:02:57.525169  178757 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:02:57.572074  178757 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:02:57.572108  178757 cache_images.go:85] Images are preloaded, skipping loading
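Here minikube decides whether the preload can be skipped by listing images with `sudo crictl images --output json` and looking for the expected tags. A small Go sketch of that check; the JSON shape (an "images" array carrying "repoTags") is assumed from crictl's output format rather than taken from this log:

package main

import (
	"encoding/json"
	"fmt"
)

// crictlImages models the assumed shape of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the listing contains the wanted tag.
func hasImage(raw []byte, want string) (bool, error) {
	var parsed crictlImages
	if err := json.Unmarshal(raw, &parsed); err != nil {
		return false, err
	}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.32.0"]}]}`)
	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.32.0")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", ok)
}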
	I0917 01:02:57.572126  178757 kubeadm.go:926] updating node { 192.168.50.5 8443 v1.32.0 crio true true} ...
	I0917 01:02:57.572286  178757 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-959742 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-959742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 01:02:57.572383  178757 ssh_runner.go:195] Run: crio config
	I0917 01:02:57.620348  178757 cni.go:84] Creating CNI manager for ""
	I0917 01:02:57.620368  178757 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 01:02:57.620379  178757 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 01:02:57.620401  178757 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.5 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-959742 NodeName:test-preload-959742 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 01:02:57.620513  178757 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-959742"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.5"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.5"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 01:02:57.620576  178757 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0917 01:02:57.632797  178757 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 01:02:57.632894  178757 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 01:02:57.645321  178757 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 01:02:57.666928  178757 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 01:02:57.688343  178757 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I0917 01:02:57.710829  178757 ssh_runner.go:195] Run: grep 192.168.50.5	control-plane.minikube.internal$ /etc/hosts
	I0917 01:02:57.715417  178757 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:02:57.731702  178757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:02:57.874327  178757 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 01:02:57.906357  178757 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/test-preload-959742 for IP: 192.168.50.5
	I0917 01:02:57.906393  178757 certs.go:194] generating shared ca certs ...
	I0917 01:02:57.906416  178757 certs.go:226] acquiring lock for ca certs: {Name:mk9185d5103eebb4e8c41dd45f840888861a3f37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:02:57.906637  178757 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-141589/.minikube/ca.key
	I0917 01:02:57.906718  178757 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-141589/.minikube/proxy-client-ca.key
	I0917 01:02:57.906737  178757 certs.go:256] generating profile certs ...
	I0917 01:02:57.906887  178757 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/test-preload-959742/client.key
	I0917 01:02:57.906997  178757 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/test-preload-959742/apiserver.key.8ae557ea
	I0917 01:02:57.907061  178757 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/test-preload-959742/proxy-client.key
	I0917 01:02:57.907225  178757 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/145530.pem (1338 bytes)
	W0917 01:02:57.907285  178757 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-141589/.minikube/certs/145530_empty.pem, impossibly tiny 0 bytes
	I0917 01:02:57.907304  178757 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 01:02:57.907340  178757 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem (1078 bytes)
	I0917 01:02:57.907374  178757 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/cert.pem (1123 bytes)
	I0917 01:02:57.907406  178757 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/key.pem (1675 bytes)
	I0917 01:02:57.907472  178757 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/files/etc/ssl/certs/1455302.pem (1708 bytes)
	I0917 01:02:57.908360  178757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 01:02:57.947100  178757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 01:02:57.984893  178757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 01:02:58.018232  178757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 01:02:58.051286  178757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/test-preload-959742/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0917 01:02:58.082704  178757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/test-preload-959742/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 01:02:58.116128  178757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/test-preload-959742/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 01:02:58.148380  178757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/test-preload-959742/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 01:02:58.179970  178757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/files/etc/ssl/certs/1455302.pem --> /usr/share/ca-certificates/1455302.pem (1708 bytes)
	I0917 01:02:58.211370  178757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 01:02:58.242476  178757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/certs/145530.pem --> /usr/share/ca-certificates/145530.pem (1338 bytes)
	I0917 01:02:58.275561  178757 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 01:02:58.298535  178757 ssh_runner.go:195] Run: openssl version
	I0917 01:02:58.305678  178757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1455302.pem && ln -fs /usr/share/ca-certificates/1455302.pem /etc/ssl/certs/1455302.pem"
	I0917 01:02:58.319642  178757 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1455302.pem
	I0917 01:02:58.325816  178757 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:07 /usr/share/ca-certificates/1455302.pem
	I0917 01:02:58.325898  178757 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1455302.pem
	I0917 01:02:58.334064  178757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1455302.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 01:02:58.349786  178757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 01:02:58.364144  178757 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:02:58.369876  178757 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:58 /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:02:58.369960  178757 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:02:58.377771  178757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 01:02:58.392556  178757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145530.pem && ln -fs /usr/share/ca-certificates/145530.pem /etc/ssl/certs/145530.pem"
	I0917 01:02:58.407002  178757 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145530.pem
	I0917 01:02:58.412721  178757 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:07 /usr/share/ca-certificates/145530.pem
	I0917 01:02:58.412805  178757 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145530.pem
	I0917 01:02:58.420913  178757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/145530.pem /etc/ssl/certs/51391683.0"
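The test -L / ln -fs commands above install each CA certificate under /etc/ssl/certs by its OpenSSL subject hash. A hedged Go sketch of one such link, shelling out to the same `openssl x509 -hash -noout` invocation; the paths are the ones from the log, and writing to /etc/ssl/certs requires root:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash reproduces the openssl-hash + ln -fs step from the log:
// compute the subject hash of a CA cert and symlink it as <hash>.0 in certsDir.
func linkCertByHash(pemPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", fmt.Errorf("openssl x509 -hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link if present
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked:", link)
}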
	I0917 01:02:58.435754  178757 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 01:02:58.441846  178757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 01:02:58.450031  178757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 01:02:58.457787  178757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 01:02:58.466074  178757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 01:02:58.473755  178757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 01:02:58.481880  178757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
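Each `openssl x509 ... -checkend 86400` call above asks whether a certificate expires within the next 24 hours. An equivalent check with Go's crypto/x509, shown as a sketch rather than what minikube itself does (the certificate path is copied from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin is the crypto/x509 analogue of
// `openssl x509 -noout -in <crt> -checkend 86400`: report whether the
// certificate expires within the given window.
func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	soon, err := expiresWithin(data, 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}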
	I0917 01:02:58.490146  178757 kubeadm.go:392] StartCluster: {Name:test-preload-959742 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-959742 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.5 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:02:58.490277  178757 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 01:02:58.490342  178757 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 01:02:58.531782  178757 cri.go:89] found id: ""
	I0917 01:02:58.531903  178757 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 01:02:58.545206  178757 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 01:02:58.545230  178757 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0917 01:02:58.545281  178757 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 01:02:58.558510  178757 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 01:02:58.559114  178757 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-959742" does not appear in /home/jenkins/minikube-integration/21550-141589/kubeconfig
	I0917 01:02:58.559273  178757 kubeconfig.go:62] /home/jenkins/minikube-integration/21550-141589/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-959742" cluster setting kubeconfig missing "test-preload-959742" context setting]
	I0917 01:02:58.559629  178757 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/kubeconfig: {Name:mk94de3540a2264fcc25d797d3876af7c7bbc524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:02:58.560346  178757 kapi.go:59] client config for test-preload-959742: &rest.Config{Host:"https://192.168.50.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-141589/.minikube/profiles/test-preload-959742/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-141589/.minikube/profiles/test-preload-959742/client.key", CAFile:"/home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8
(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 01:02:58.560923  178757 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 01:02:58.560944  178757 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0917 01:02:58.560952  178757 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 01:02:58.560958  178757 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0917 01:02:58.560964  178757 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0917 01:02:58.561411  178757 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 01:02:58.573688  178757 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.50.5
	I0917 01:02:58.573731  178757 kubeadm.go:1152] stopping kube-system containers ...
	I0917 01:02:58.573747  178757 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0917 01:02:58.573831  178757 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 01:02:58.629462  178757 cri.go:89] found id: ""
	I0917 01:02:58.629556  178757 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0917 01:02:58.660824  178757 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 01:02:58.673955  178757 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 01:02:58.673984  178757 kubeadm.go:157] found existing configuration files:
	
	I0917 01:02:58.674047  178757 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 01:02:58.685459  178757 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 01:02:58.685534  178757 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 01:02:58.697968  178757 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 01:02:58.710233  178757 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 01:02:58.710307  178757 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 01:02:58.723447  178757 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 01:02:58.735259  178757 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 01:02:58.735343  178757 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 01:02:58.748141  178757 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 01:02:58.760096  178757 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 01:02:58.760174  178757 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 01:02:58.773059  178757 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 01:02:58.786356  178757 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 01:02:58.845657  178757 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 01:02:59.917172  178757 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.07146868s)
	I0917 01:02:59.917204  178757 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0917 01:03:00.186926  178757 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 01:03:00.258695  178757 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0917 01:03:00.346127  178757 api_server.go:52] waiting for apiserver process to appear ...
	I0917 01:03:00.346246  178757 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 01:03:00.847026  178757 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 01:03:01.346796  178757 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 01:03:01.847104  178757 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 01:03:01.882812  178757 api_server.go:72] duration metric: took 1.536687429s to wait for apiserver process to appear ...
	I0917 01:03:01.882865  178757 api_server.go:88] waiting for apiserver healthz status ...
	I0917 01:03:01.882896  178757 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I0917 01:03:01.883519  178757 api_server.go:269] stopped: https://192.168.50.5:8443/healthz: Get "https://192.168.50.5:8443/healthz": dial tcp 192.168.50.5:8443: connect: connection refused
	I0917 01:03:02.383052  178757 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I0917 01:03:04.637573  178757 api_server.go:279] https://192.168.50.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 01:03:04.637605  178757 api_server.go:103] status: https://192.168.50.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 01:03:04.637619  178757 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I0917 01:03:04.649610  178757 api_server.go:279] https://192.168.50.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0917 01:03:04.649647  178757 api_server.go:103] status: https://192.168.50.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0917 01:03:04.883058  178757 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I0917 01:03:04.888876  178757 api_server.go:279] https://192.168.50.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 01:03:04.888914  178757 api_server.go:103] status: https://192.168.50.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 01:03:05.382981  178757 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I0917 01:03:05.388218  178757 api_server.go:279] https://192.168.50.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 01:03:05.388250  178757 api_server.go:103] status: https://192.168.50.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 01:03:05.883972  178757 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I0917 01:03:05.891363  178757 api_server.go:279] https://192.168.50.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0917 01:03:05.891393  178757 api_server.go:103] status: https://192.168.50.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0917 01:03:06.383849  178757 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I0917 01:03:06.390341  178757 api_server.go:279] https://192.168.50.5:8443/healthz returned 200:
	ok
	I0917 01:03:06.399972  178757 api_server.go:141] control plane version: v1.32.0
	I0917 01:03:06.400018  178757 api_server.go:131] duration metric: took 4.517139339s to wait for apiserver health ...
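The healthz exchange above is a polling loop: 403 responses appear while anonymous access is still forbidden, 500 while bootstrap post-start hooks are pending, and the loop stops at the first 200 "ok". A minimal Go sketch of such a loop; the URL comes from the log, while the timeout, retry cadence, and InsecureSkipVerify setting are assumptions of this sketch:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 "ok" or the deadline passes. 403/500 answers (seen in the log while the
// control plane finishes bootstrapping) are treated as "not ready yet".
// InsecureSkipVerify is used only because this sketch configures no CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.5:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}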
	I0917 01:03:06.400029  178757 cni.go:84] Creating CNI manager for ""
	I0917 01:03:06.400035  178757 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 01:03:06.401799  178757 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 01:03:06.403952  178757 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 01:03:06.427132  178757 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 01:03:06.464776  178757 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 01:03:06.473570  178757 system_pods.go:59] 7 kube-system pods found
	I0917 01:03:06.473608  178757 system_pods.go:61] "coredns-668d6bf9bc-csbgr" [abf70e6f-d95c-4efc-b8d2-6668b8e15546] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 01:03:06.473623  178757 system_pods.go:61] "etcd-test-preload-959742" [12ed07b3-4b04-490a-a878-a821a639cee0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 01:03:06.473631  178757 system_pods.go:61] "kube-apiserver-test-preload-959742" [a04cf0ad-7af4-4137-8f47-ae3a8669e948] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 01:03:06.473636  178757 system_pods.go:61] "kube-controller-manager-test-preload-959742" [9ac884a6-10a3-4e94-8ce5-53d60980d925] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 01:03:06.473642  178757 system_pods.go:61] "kube-proxy-xfm6w" [43532aea-4198-49b7-be80-0ad52a4970c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0917 01:03:06.473649  178757 system_pods.go:61] "kube-scheduler-test-preload-959742" [e05bbd4c-b0af-4cd1-a123-4b0d26512323] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 01:03:06.473654  178757 system_pods.go:61] "storage-provisioner" [ae22a5c6-623f-4ea1-befa-2d39c77970a9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 01:03:06.473660  178757 system_pods.go:74] duration metric: took 8.850514ms to wait for pod list to return data ...
	I0917 01:03:06.473667  178757 node_conditions.go:102] verifying NodePressure condition ...
	I0917 01:03:06.488609  178757 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 01:03:06.488647  178757 node_conditions.go:123] node cpu capacity is 2
	I0917 01:03:06.488660  178757 node_conditions.go:105] duration metric: took 14.988487ms to run NodePressure ...
	I0917 01:03:06.488678  178757 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0917 01:03:06.914877  178757 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0917 01:03:06.920768  178757 kubeadm.go:735] kubelet initialised
	I0917 01:03:06.920797  178757 kubeadm.go:736] duration metric: took 5.884765ms waiting for restarted kubelet to initialise ...
	I0917 01:03:06.920815  178757 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 01:03:06.942976  178757 ops.go:34] apiserver oom_adj: -16
	I0917 01:03:06.943005  178757 kubeadm.go:593] duration metric: took 8.39776932s to restartPrimaryControlPlane
	I0917 01:03:06.943023  178757 kubeadm.go:394] duration metric: took 8.452881941s to StartCluster
	I0917 01:03:06.943046  178757 settings.go:142] acquiring lock: {Name:mkba5c2f6664f4802b257b08a521179f4376b493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:03:06.943155  178757 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-141589/kubeconfig
	I0917 01:03:06.943900  178757 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/kubeconfig: {Name:mk94de3540a2264fcc25d797d3876af7c7bbc524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:03:06.944223  178757 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.5 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 01:03:06.944418  178757 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 01:03:06.944575  178757 addons.go:69] Setting storage-provisioner=true in profile "test-preload-959742"
	I0917 01:03:06.944603  178757 addons.go:238] Setting addon storage-provisioner=true in "test-preload-959742"
	W0917 01:03:06.944612  178757 addons.go:247] addon storage-provisioner should already be in state true
	I0917 01:03:06.944608  178757 addons.go:69] Setting default-storageclass=true in profile "test-preload-959742"
	I0917 01:03:06.944636  178757 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-959742"
	I0917 01:03:06.944658  178757 host.go:66] Checking if "test-preload-959742" exists ...
	I0917 01:03:06.944669  178757 config.go:182] Loaded profile config "test-preload-959742": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0917 01:03:06.945134  178757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:03:06.945195  178757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:03:06.945307  178757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:03:06.945352  178757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:03:06.946185  178757 out.go:179] * Verifying Kubernetes components...
	I0917 01:03:06.947799  178757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:03:06.961395  178757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40651
	I0917 01:03:06.961393  178757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0917 01:03:06.962010  178757 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:03:06.962112  178757 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:03:06.962586  178757 main.go:141] libmachine: Using API Version  1
	I0917 01:03:06.962614  178757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:03:06.962679  178757 main.go:141] libmachine: Using API Version  1
	I0917 01:03:06.962698  178757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:03:06.963102  178757 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:03:06.963112  178757 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:03:06.963312  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetState
	I0917 01:03:06.963755  178757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:03:06.963809  178757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:03:06.966431  178757 kapi.go:59] client config for test-preload-959742: &rest.Config{Host:"https://192.168.50.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-141589/.minikube/profiles/test-preload-959742/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-141589/.minikube/profiles/test-preload-959742/client.key", CAFile:"/home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8
(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 01:03:06.966728  178757 addons.go:238] Setting addon default-storageclass=true in "test-preload-959742"
	W0917 01:03:06.966747  178757 addons.go:247] addon default-storageclass should already be in state true
	I0917 01:03:06.966781  178757 host.go:66] Checking if "test-preload-959742" exists ...
	I0917 01:03:06.967121  178757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:03:06.967175  178757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:03:06.981666  178757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36923
	I0917 01:03:06.982263  178757 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:03:06.982885  178757 main.go:141] libmachine: Using API Version  1
	I0917 01:03:06.982918  178757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:03:06.983146  178757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
	I0917 01:03:06.983369  178757 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:03:06.983619  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetState
	I0917 01:03:06.983723  178757 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:03:06.984321  178757 main.go:141] libmachine: Using API Version  1
	I0917 01:03:06.984347  178757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:03:06.984927  178757 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:03:06.985577  178757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:03:06.985634  178757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:03:06.986085  178757 main.go:141] libmachine: (test-preload-959742) Calling .DriverName
	I0917 01:03:06.988223  178757 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 01:03:06.989368  178757 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 01:03:06.989390  178757 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 01:03:06.989410  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHHostname
	I0917 01:03:06.993619  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:03:06.994224  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:03:06.994261  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:03:06.994495  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHPort
	I0917 01:03:06.994711  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHKeyPath
	I0917 01:03:06.994997  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHUsername
	I0917 01:03:06.995175  178757 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/test-preload-959742/id_rsa Username:docker}
	I0917 01:03:07.004585  178757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0917 01:03:07.005506  178757 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:03:07.006274  178757 main.go:141] libmachine: Using API Version  1
	I0917 01:03:07.006310  178757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:03:07.006819  178757 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:03:07.007038  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetState
	I0917 01:03:07.009601  178757 main.go:141] libmachine: (test-preload-959742) Calling .DriverName
	I0917 01:03:07.009876  178757 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 01:03:07.009895  178757 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 01:03:07.009917  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHHostname
	I0917 01:03:07.014061  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:03:07.014716  178757 main.go:141] libmachine: (test-preload-959742) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:1c:f0", ip: ""} in network mk-test-preload-959742: {Iface:virbr2 ExpiryTime:2025-09-17 02:02:47 +0000 UTC Type:0 Mac:52:54:00:e9:1c:f0 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:test-preload-959742 Clientid:01:52:54:00:e9:1c:f0}
	I0917 01:03:07.014747  178757 main.go:141] libmachine: (test-preload-959742) DBG | domain test-preload-959742 has defined IP address 192.168.50.5 and MAC address 52:54:00:e9:1c:f0 in network mk-test-preload-959742
	I0917 01:03:07.014978  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHPort
	I0917 01:03:07.015221  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHKeyPath
	I0917 01:03:07.015536  178757 main.go:141] libmachine: (test-preload-959742) Calling .GetSSHUsername
	I0917 01:03:07.015817  178757 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/test-preload-959742/id_rsa Username:docker}
	I0917 01:03:07.181115  178757 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 01:03:07.202319  178757 node_ready.go:35] waiting up to 6m0s for node "test-preload-959742" to be "Ready" ...
	I0917 01:03:07.309034  178757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 01:03:07.345597  178757 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 01:03:08.168953  178757 main.go:141] libmachine: Making call to close driver server
	I0917 01:03:08.168978  178757 main.go:141] libmachine: Making call to close driver server
	I0917 01:03:08.169008  178757 main.go:141] libmachine: (test-preload-959742) Calling .Close
	I0917 01:03:08.168991  178757 main.go:141] libmachine: (test-preload-959742) Calling .Close
	I0917 01:03:08.169337  178757 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:03:08.169355  178757 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:03:08.169364  178757 main.go:141] libmachine: Making call to close driver server
	I0917 01:03:08.169371  178757 main.go:141] libmachine: (test-preload-959742) Calling .Close
	I0917 01:03:08.169421  178757 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:03:08.169445  178757 main.go:141] libmachine: (test-preload-959742) DBG | Closing plugin on server side
	I0917 01:03:08.169449  178757 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:03:08.169471  178757 main.go:141] libmachine: Making call to close driver server
	I0917 01:03:08.169492  178757 main.go:141] libmachine: (test-preload-959742) Calling .Close
	I0917 01:03:08.169601  178757 main.go:141] libmachine: (test-preload-959742) DBG | Closing plugin on server side
	I0917 01:03:08.169608  178757 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:03:08.169620  178757 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:03:08.169681  178757 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:03:08.169694  178757 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:03:08.178804  178757 main.go:141] libmachine: Making call to close driver server
	I0917 01:03:08.178829  178757 main.go:141] libmachine: (test-preload-959742) Calling .Close
	I0917 01:03:08.179137  178757 main.go:141] libmachine: Successfully made call to close driver server
	I0917 01:03:08.179155  178757 main.go:141] libmachine: Making call to close connection to plugin binary
	I0917 01:03:08.179165  178757 main.go:141] libmachine: (test-preload-959742) DBG | Closing plugin on server side
	I0917 01:03:08.181536  178757 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0917 01:03:08.182643  178757 addons.go:514] duration metric: took 1.238243616s for enable addons: enabled=[storage-provisioner default-storageclass]
	W0917 01:03:09.206550  178757 node_ready.go:57] node "test-preload-959742" has "Ready":"False" status (will retry)
	W0917 01:03:11.706243  178757 node_ready.go:57] node "test-preload-959742" has "Ready":"False" status (will retry)
	W0917 01:03:13.707083  178757 node_ready.go:57] node "test-preload-959742" has "Ready":"False" status (will retry)
	I0917 01:03:15.206668  178757 node_ready.go:49] node "test-preload-959742" is "Ready"
	I0917 01:03:15.206711  178757 node_ready.go:38] duration metric: took 8.004348804s for node "test-preload-959742" to be "Ready" ...
	I0917 01:03:15.206740  178757 api_server.go:52] waiting for apiserver process to appear ...
	I0917 01:03:15.206798  178757 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 01:03:15.229064  178757 api_server.go:72] duration metric: took 8.284792121s to wait for apiserver process to appear ...
	I0917 01:03:15.229103  178757 api_server.go:88] waiting for apiserver healthz status ...
	I0917 01:03:15.229127  178757 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I0917 01:03:15.234580  178757 api_server.go:279] https://192.168.50.5:8443/healthz returned 200:
	ok
	I0917 01:03:15.235815  178757 api_server.go:141] control plane version: v1.32.0
	I0917 01:03:15.235841  178757 api_server.go:131] duration metric: took 6.728901ms to wait for apiserver health ...
	I0917 01:03:15.235862  178757 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 01:03:15.239686  178757 system_pods.go:59] 7 kube-system pods found
	I0917 01:03:15.239714  178757 system_pods.go:61] "coredns-668d6bf9bc-csbgr" [abf70e6f-d95c-4efc-b8d2-6668b8e15546] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 01:03:15.239722  178757 system_pods.go:61] "etcd-test-preload-959742" [12ed07b3-4b04-490a-a878-a821a639cee0] Running
	I0917 01:03:15.239735  178757 system_pods.go:61] "kube-apiserver-test-preload-959742" [a04cf0ad-7af4-4137-8f47-ae3a8669e948] Running
	I0917 01:03:15.239746  178757 system_pods.go:61] "kube-controller-manager-test-preload-959742" [9ac884a6-10a3-4e94-8ce5-53d60980d925] Running
	I0917 01:03:15.239752  178757 system_pods.go:61] "kube-proxy-xfm6w" [43532aea-4198-49b7-be80-0ad52a4970c3] Running
	I0917 01:03:15.239761  178757 system_pods.go:61] "kube-scheduler-test-preload-959742" [e05bbd4c-b0af-4cd1-a123-4b0d26512323] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 01:03:15.239766  178757 system_pods.go:61] "storage-provisioner" [ae22a5c6-623f-4ea1-befa-2d39c77970a9] Running
	I0917 01:03:15.239775  178757 system_pods.go:74] duration metric: took 3.905322ms to wait for pod list to return data ...
	I0917 01:03:15.239783  178757 default_sa.go:34] waiting for default service account to be created ...
	I0917 01:03:15.242375  178757 default_sa.go:45] found service account: "default"
	I0917 01:03:15.242406  178757 default_sa.go:55] duration metric: took 2.614503ms for default service account to be created ...
	I0917 01:03:15.242427  178757 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 01:03:15.245257  178757 system_pods.go:86] 7 kube-system pods found
	I0917 01:03:15.245291  178757 system_pods.go:89] "coredns-668d6bf9bc-csbgr" [abf70e6f-d95c-4efc-b8d2-6668b8e15546] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 01:03:15.245300  178757 system_pods.go:89] "etcd-test-preload-959742" [12ed07b3-4b04-490a-a878-a821a639cee0] Running
	I0917 01:03:15.245309  178757 system_pods.go:89] "kube-apiserver-test-preload-959742" [a04cf0ad-7af4-4137-8f47-ae3a8669e948] Running
	I0917 01:03:15.245321  178757 system_pods.go:89] "kube-controller-manager-test-preload-959742" [9ac884a6-10a3-4e94-8ce5-53d60980d925] Running
	I0917 01:03:15.245327  178757 system_pods.go:89] "kube-proxy-xfm6w" [43532aea-4198-49b7-be80-0ad52a4970c3] Running
	I0917 01:03:15.245341  178757 system_pods.go:89] "kube-scheduler-test-preload-959742" [e05bbd4c-b0af-4cd1-a123-4b0d26512323] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0917 01:03:15.245352  178757 system_pods.go:89] "storage-provisioner" [ae22a5c6-623f-4ea1-befa-2d39c77970a9] Running
	I0917 01:03:15.245362  178757 system_pods.go:126] duration metric: took 2.927666ms to wait for k8s-apps to be running ...
	I0917 01:03:15.245375  178757 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 01:03:15.245441  178757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 01:03:15.262207  178757 system_svc.go:56] duration metric: took 16.818477ms WaitForService to wait for kubelet
	I0917 01:03:15.262241  178757 kubeadm.go:578] duration metric: took 8.317979544s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 01:03:15.262260  178757 node_conditions.go:102] verifying NodePressure condition ...
	I0917 01:03:15.267137  178757 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0917 01:03:15.267164  178757 node_conditions.go:123] node cpu capacity is 2
	I0917 01:03:15.267176  178757 node_conditions.go:105] duration metric: took 4.911218ms to run NodePressure ...
	I0917 01:03:15.267196  178757 start.go:241] waiting for startup goroutines ...
	I0917 01:03:15.267203  178757 start.go:246] waiting for cluster config update ...
	I0917 01:03:15.267214  178757 start.go:255] writing updated cluster config ...
	I0917 01:03:15.267483  178757 ssh_runner.go:195] Run: rm -f paused
	I0917 01:03:15.273157  178757 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 01:03:15.273707  178757 kapi.go:59] client config for test-preload-959742: &rest.Config{Host:"https://192.168.50.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21550-141589/.minikube/profiles/test-preload-959742/client.crt", KeyFile:"/home/jenkins/minikube-integration/21550-141589/.minikube/profiles/test-preload-959742/client.key", CAFile:"/home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8
(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4620), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 01:03:15.276577  178757 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-csbgr" in "kube-system" namespace to be "Ready" or be gone ...
	W0917 01:03:17.284132  178757 pod_ready.go:104] pod "coredns-668d6bf9bc-csbgr" is not "Ready", error: <nil>
	W0917 01:03:19.783979  178757 pod_ready.go:104] pod "coredns-668d6bf9bc-csbgr" is not "Ready", error: <nil>
	W0917 01:03:21.784651  178757 pod_ready.go:104] pod "coredns-668d6bf9bc-csbgr" is not "Ready", error: <nil>
	I0917 01:03:23.782491  178757 pod_ready.go:94] pod "coredns-668d6bf9bc-csbgr" is "Ready"
	I0917 01:03:23.782520  178757 pod_ready.go:86] duration metric: took 8.505920601s for pod "coredns-668d6bf9bc-csbgr" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:03:23.785258  178757 pod_ready.go:83] waiting for pod "etcd-test-preload-959742" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:03:23.789991  178757 pod_ready.go:94] pod "etcd-test-preload-959742" is "Ready"
	I0917 01:03:23.790020  178757 pod_ready.go:86] duration metric: took 4.73578ms for pod "etcd-test-preload-959742" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:03:23.792239  178757 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-959742" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:03:23.797555  178757 pod_ready.go:94] pod "kube-apiserver-test-preload-959742" is "Ready"
	I0917 01:03:23.797598  178757 pod_ready.go:86] duration metric: took 5.332691ms for pod "kube-apiserver-test-preload-959742" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:03:23.799786  178757 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-959742" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:03:23.980752  178757 pod_ready.go:94] pod "kube-controller-manager-test-preload-959742" is "Ready"
	I0917 01:03:23.980785  178757 pod_ready.go:86] duration metric: took 180.970999ms for pod "kube-controller-manager-test-preload-959742" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:03:24.181294  178757 pod_ready.go:83] waiting for pod "kube-proxy-xfm6w" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:03:24.581183  178757 pod_ready.go:94] pod "kube-proxy-xfm6w" is "Ready"
	I0917 01:03:24.581214  178757 pod_ready.go:86] duration metric: took 399.894132ms for pod "kube-proxy-xfm6w" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:03:24.781678  178757 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-959742" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:03:25.180477  178757 pod_ready.go:94] pod "kube-scheduler-test-preload-959742" is "Ready"
	I0917 01:03:25.180520  178757 pod_ready.go:86] duration metric: took 398.799319ms for pod "kube-scheduler-test-preload-959742" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:03:25.180531  178757 pod_ready.go:40] duration metric: took 9.907334309s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 01:03:25.228415  178757 start.go:617] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I0917 01:03:25.230402  178757 out.go:203] 
	W0917 01:03:25.231964  178757 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I0917 01:03:25.233725  178757 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I0917 01:03:25.235515  178757 out.go:179] * Done! kubectl is now configured to use "test-preload-959742" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.269253862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c5c2e49-128b-46ad-8fca-a2471bd2dfea name=/runtime.v1.RuntimeService/Version
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.270920184Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=544aff14-3daf-483d-9fcd-acdc915df822 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.271395530Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758071006271365513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=544aff14-3daf-483d-9fcd-acdc915df822 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.272338972Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7bc905c4-1893-4afe-bdc4-b38103bd009e name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.272534388Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7bc905c4-1893-4afe-bdc4-b38103bd009e name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.272718250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7e4ce0677f78b0eb4d06b85ceb5a9e575ecb988efc5ce4013c7c26e64b359e4,PodSandboxId:01d319be65d8a9d2cec237d54b8bcd5c52c7e08c218a71434ad1f5cb70fd0c56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1758070993409368851,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-csbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf70e6f-d95c-4efc-b8d2-6668b8e15546,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bad1e85a828db4212d1be3954fac4f42155a36b07c635f0745fb47bcd442ec3b,PodSandboxId:00b90500fd0b36f47839693699b3ec09ac67184ca116e14d3500f0ab371fa977,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758070985856505586,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: ae22a5c6-623f-4ea1-befa-2d39c77970a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d032c0b19a8ad6558c563cd56850028f0aacad1e0c4deac4b65ddfc459bb3350,PodSandboxId:f2302f7a05524b4ef44659a6e28d5dc41f6e223046230d77dd4965a06659f480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1758070985756786809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xfm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43
532aea-4198-49b7-be80-0ad52a4970c3,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76fdc3f82622dc89965280db1741d97a2622a7b82e867302ca93d85009925ef6,PodSandboxId:87aa3c989ef4154bc281a4c6f82e1f29e8d60271edfed440407f2cb000e08afb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1758070981442206437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d24173e5
ce677acef80f0143144dcb7,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a350fed20eb24eef39efa2fefb9b9792828aec4e252a8256a0532feed36f056f,PodSandboxId:0182ac4bda0e63084d14e8556a2bac2038501463eee969d560aca4fa7b1b3fab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1758070981437270861,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a08ee9374c2bd444c5de
489987467315,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40358b986c0c41bc5d8b8627681ab0656ece95c8fbea738616cbe50924a8655a,PodSandboxId:e7b11a45907c24de35a4c7aa193f11561db321c5ea063ae9aa3151dbc3143909,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1758070981423646644,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8fce6795925843a879c31d0e16617c4,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc78de9e92bb78d4d80e93a4ad23adcf32583701c0e8968c54bdd5624cd1d463,PodSandboxId:8471cf3521ece6c718ab2ebcc3d7fe9c0e3f6e54ca9afd5243e85a3ffb571816,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1758070981322168842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a74b4477b8ace10008f69413002983c2,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7bc905c4-1893-4afe-bdc4-b38103bd009e name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.315021208Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bbe49e56-3576-415c-835e-95278a4f54d5 name=/runtime.v1.RuntimeService/Version
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.315146770Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bbe49e56-3576-415c-835e-95278a4f54d5 name=/runtime.v1.RuntimeService/Version
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.316924952Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=61e5ca49-219f-4c64-8628-0bbf1b4663c5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.317606479Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758071006317582990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61e5ca49-219f-4c64-8628-0bbf1b4663c5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.318319574Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9cd76a0-41c1-4d1f-b1cd-931cce416ad8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.318469417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9cd76a0-41c1-4d1f-b1cd-931cce416ad8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.319314827Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7e4ce0677f78b0eb4d06b85ceb5a9e575ecb988efc5ce4013c7c26e64b359e4,PodSandboxId:01d319be65d8a9d2cec237d54b8bcd5c52c7e08c218a71434ad1f5cb70fd0c56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1758070993409368851,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-csbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf70e6f-d95c-4efc-b8d2-6668b8e15546,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bad1e85a828db4212d1be3954fac4f42155a36b07c635f0745fb47bcd442ec3b,PodSandboxId:00b90500fd0b36f47839693699b3ec09ac67184ca116e14d3500f0ab371fa977,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758070985856505586,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: ae22a5c6-623f-4ea1-befa-2d39c77970a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d032c0b19a8ad6558c563cd56850028f0aacad1e0c4deac4b65ddfc459bb3350,PodSandboxId:f2302f7a05524b4ef44659a6e28d5dc41f6e223046230d77dd4965a06659f480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1758070985756786809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xfm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43
532aea-4198-49b7-be80-0ad52a4970c3,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76fdc3f82622dc89965280db1741d97a2622a7b82e867302ca93d85009925ef6,PodSandboxId:87aa3c989ef4154bc281a4c6f82e1f29e8d60271edfed440407f2cb000e08afb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1758070981442206437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d24173e5
ce677acef80f0143144dcb7,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a350fed20eb24eef39efa2fefb9b9792828aec4e252a8256a0532feed36f056f,PodSandboxId:0182ac4bda0e63084d14e8556a2bac2038501463eee969d560aca4fa7b1b3fab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1758070981437270861,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a08ee9374c2bd444c5de
489987467315,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40358b986c0c41bc5d8b8627681ab0656ece95c8fbea738616cbe50924a8655a,PodSandboxId:e7b11a45907c24de35a4c7aa193f11561db321c5ea063ae9aa3151dbc3143909,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1758070981423646644,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8fce6795925843a879c31d0e16617c4,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc78de9e92bb78d4d80e93a4ad23adcf32583701c0e8968c54bdd5624cd1d463,PodSandboxId:8471cf3521ece6c718ab2ebcc3d7fe9c0e3f6e54ca9afd5243e85a3ffb571816,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1758070981322168842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a74b4477b8ace10008f69413002983c2,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c9cd76a0-41c1-4d1f-b1cd-931cce416ad8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.361925401Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=35796995-582e-40f8-ab10-2d51c244be81 name=/runtime.v1.RuntimeService/Version
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.362000172Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35796995-582e-40f8-ab10-2d51c244be81 name=/runtime.v1.RuntimeService/Version
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.364556704Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97d1a5ae-6799-415f-be62-bb7180c75aff name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.364944508Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f77625d-9516-4989-a8fc-009329c12a7a name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.365010274Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758071006364985632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97d1a5ae-6799-415f-be62-bb7180c75aff name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.365623777Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:01d319be65d8a9d2cec237d54b8bcd5c52c7e08c218a71434ad1f5cb70fd0c56,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-csbgr,Uid:abf70e6f-d95c-4efc-b8d2-6668b8e15546,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758070993167576610,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-csbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf70e6f-d95c-4efc-b8d2-6668b8e15546,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-17T01:03:05.271835618Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2302f7a05524b4ef44659a6e28d5dc41f6e223046230d77dd4965a06659f480,Metadata:&PodSandboxMetadata{Name:kube-proxy-xfm6w,Uid:43532aea-4198-49b7-be80-0ad52a4970c3,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1758070985587103959,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xfm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43532aea-4198-49b7-be80-0ad52a4970c3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-09-17T01:03:05.271871225Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:00b90500fd0b36f47839693699b3ec09ac67184ca116e14d3500f0ab371fa977,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ae22a5c6-623f-4ea1-befa-2d39c77970a9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758070985583031210,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae22a5c6-623f-4ea1-befa-2d39
c77970a9,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-09-17T01:03:05.271873930Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e7b11a45907c24de35a4c7aa193f11561db321c5ea063ae9aa3151dbc3143909,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-959742,Uid:c8fce6795925843a8
79c31d0e16617c4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758070981141000991,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8fce6795925843a879c31d0e16617c4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.5:2379,kubernetes.io/config.hash: c8fce6795925843a879c31d0e16617c4,kubernetes.io/config.seen: 2025-09-17T01:03:00.323745187Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0182ac4bda0e63084d14e8556a2bac2038501463eee969d560aca4fa7b1b3fab,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-959742,Uid:a08ee9374c2bd444c5de489987467315,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758070981134021205,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-prel
oad-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a08ee9374c2bd444c5de489987467315,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a08ee9374c2bd444c5de489987467315,kubernetes.io/config.seen: 2025-09-17T01:03:00.278848560Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8471cf3521ece6c718ab2ebcc3d7fe9c0e3f6e54ca9afd5243e85a3ffb571816,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-959742,Uid:a74b4477b8ace10008f69413002983c2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758070981110915861,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a74b4477b8ace10008f69413002983c2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a74b4477b8ace10008f69413002983c2,kubernetes.io/config.seen: 2025-09-17T01:0
3:00.278846770Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:87aa3c989ef4154bc281a4c6f82e1f29e8d60271edfed440407f2cb000e08afb,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-959742,Uid:7d24173e5ce677acef80f0143144dcb7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1758070981107773701,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d24173e5ce677acef80f0143144dcb7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.5:8443,kubernetes.io/config.hash: 7d24173e5ce677acef80f0143144dcb7,kubernetes.io/config.seen: 2025-09-17T01:03:00.278842021Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=8f77625d-9516-4989-a8fc-009329c12a7a name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.365670410Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a19bec2-19a5-423c-9bce-9e1698c4097c name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.366801992Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a19bec2-19a5-423c-9bce-9e1698c4097c name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.366985201Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10881656-569f-482e-9ba8-2be1dfef3979 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.367334877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10881656-569f-482e-9ba8-2be1dfef3979 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.367974891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7e4ce0677f78b0eb4d06b85ceb5a9e575ecb988efc5ce4013c7c26e64b359e4,PodSandboxId:01d319be65d8a9d2cec237d54b8bcd5c52c7e08c218a71434ad1f5cb70fd0c56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1758070993409368851,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-csbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf70e6f-d95c-4efc-b8d2-6668b8e15546,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bad1e85a828db4212d1be3954fac4f42155a36b07c635f0745fb47bcd442ec3b,PodSandboxId:00b90500fd0b36f47839693699b3ec09ac67184ca116e14d3500f0ab371fa977,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758070985856505586,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: ae22a5c6-623f-4ea1-befa-2d39c77970a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d032c0b19a8ad6558c563cd56850028f0aacad1e0c4deac4b65ddfc459bb3350,PodSandboxId:f2302f7a05524b4ef44659a6e28d5dc41f6e223046230d77dd4965a06659f480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1758070985756786809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xfm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43
532aea-4198-49b7-be80-0ad52a4970c3,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76fdc3f82622dc89965280db1741d97a2622a7b82e867302ca93d85009925ef6,PodSandboxId:87aa3c989ef4154bc281a4c6f82e1f29e8d60271edfed440407f2cb000e08afb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1758070981442206437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d24173e5
ce677acef80f0143144dcb7,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a350fed20eb24eef39efa2fefb9b9792828aec4e252a8256a0532feed36f056f,PodSandboxId:0182ac4bda0e63084d14e8556a2bac2038501463eee969d560aca4fa7b1b3fab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1758070981437270861,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a08ee9374c2bd444c5de
489987467315,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40358b986c0c41bc5d8b8627681ab0656ece95c8fbea738616cbe50924a8655a,PodSandboxId:e7b11a45907c24de35a4c7aa193f11561db321c5ea063ae9aa3151dbc3143909,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1758070981423646644,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8fce6795925843a879c31d0e16617c4,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc78de9e92bb78d4d80e93a4ad23adcf32583701c0e8968c54bdd5624cd1d463,PodSandboxId:8471cf3521ece6c718ab2ebcc3d7fe9c0e3f6e54ca9afd5243e85a3ffb571816,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1758070981322168842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a74b4477b8ace10008f69413002983c2,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10881656-569f-482e-9ba8-2be1dfef3979 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:03:26 test-preload-959742 crio[827]: time="2025-09-17 01:03:26.368278485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7e4ce0677f78b0eb4d06b85ceb5a9e575ecb988efc5ce4013c7c26e64b359e4,PodSandboxId:01d319be65d8a9d2cec237d54b8bcd5c52c7e08c218a71434ad1f5cb70fd0c56,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1758070993409368851,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-csbgr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf70e6f-d95c-4efc-b8d2-6668b8e15546,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bad1e85a828db4212d1be3954fac4f42155a36b07c635f0745fb47bcd442ec3b,PodSandboxId:00b90500fd0b36f47839693699b3ec09ac67184ca116e14d3500f0ab371fa977,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758070985856505586,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: ae22a5c6-623f-4ea1-befa-2d39c77970a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d032c0b19a8ad6558c563cd56850028f0aacad1e0c4deac4b65ddfc459bb3350,PodSandboxId:f2302f7a05524b4ef44659a6e28d5dc41f6e223046230d77dd4965a06659f480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1758070985756786809,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xfm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43
532aea-4198-49b7-be80-0ad52a4970c3,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76fdc3f82622dc89965280db1741d97a2622a7b82e867302ca93d85009925ef6,PodSandboxId:87aa3c989ef4154bc281a4c6f82e1f29e8d60271edfed440407f2cb000e08afb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1758070981442206437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d24173e5
ce677acef80f0143144dcb7,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a350fed20eb24eef39efa2fefb9b9792828aec4e252a8256a0532feed36f056f,PodSandboxId:0182ac4bda0e63084d14e8556a2bac2038501463eee969d560aca4fa7b1b3fab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1758070981437270861,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a08ee9374c2bd444c5de
489987467315,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40358b986c0c41bc5d8b8627681ab0656ece95c8fbea738616cbe50924a8655a,PodSandboxId:e7b11a45907c24de35a4c7aa193f11561db321c5ea063ae9aa3151dbc3143909,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1758070981423646644,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8fce6795925843a879c31d0e16617c4,},Annotations:map[string]str
ing{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc78de9e92bb78d4d80e93a4ad23adcf32583701c0e8968c54bdd5624cd1d463,PodSandboxId:8471cf3521ece6c718ab2ebcc3d7fe9c0e3f6e54ca9afd5243e85a3ffb571816,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1758070981322168842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-959742,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a74b4477b8ace10008f69413002983c2,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a19bec2-19a5-423c-9bce-9e1698c4097c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f7e4ce0677f78       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 seconds ago      Running             coredns                   1                   01d319be65d8a       coredns-668d6bf9bc-csbgr
	bad1e85a828db       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   20 seconds ago      Running             storage-provisioner       1                   00b90500fd0b3       storage-provisioner
	d032c0b19a8ad       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   20 seconds ago      Running             kube-proxy                1                   f2302f7a05524       kube-proxy-xfm6w
	76fdc3f82622d       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   25 seconds ago      Running             kube-apiserver            1                   87aa3c989ef41       kube-apiserver-test-preload-959742
	a350fed20eb24       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   25 seconds ago      Running             kube-scheduler            1                   0182ac4bda0e6       kube-scheduler-test-preload-959742
	40358b986c0c4       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   25 seconds ago      Running             etcd                      1                   e7b11a45907c2       etcd-test-preload-959742
	fc78de9e92bb7       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   25 seconds ago      Running             kube-controller-manager   1                   8471cf3521ece       kube-controller-manager-test-preload-959742
	
	
	==> coredns [f7e4ce0677f78b0eb4d06b85ceb5a9e575ecb988efc5ce4013c7c26e64b359e4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59111 - 4488 "HINFO IN 6183102707032843993.48800639711458670. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.0933395s
	
	
	==> describe nodes <==
	Name:               test-preload-959742
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-959742
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=test-preload-959742
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T01_01_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 01:01:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-959742
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 01:03:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 01:03:14 +0000   Wed, 17 Sep 2025 01:01:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 01:03:14 +0000   Wed, 17 Sep 2025 01:01:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 01:03:14 +0000   Wed, 17 Sep 2025 01:01:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 01:03:14 +0000   Wed, 17 Sep 2025 01:03:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.5
	  Hostname:    test-preload-959742
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 a51b9914d87d4f82bc24da123b1bba73
	  System UUID:                a51b9914-d87d-4f82-bc24-da123b1bba73
	  Boot ID:                    f724a3af-5baf-4c0f-b1ce-9ef62fa8e617
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-csbgr                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     106s
	  kube-system                 etcd-test-preload-959742                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         111s
	  kube-system                 kube-apiserver-test-preload-959742             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-test-preload-959742    200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-xfm6w                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-test-preload-959742             100m (5%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 104s               kube-proxy       
	  Normal   Starting                 20s                kube-proxy       
	  Normal   NodeHasSufficientMemory  111s               kubelet          Node test-preload-959742 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  111s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    111s               kubelet          Node test-preload-959742 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     111s               kubelet          Node test-preload-959742 status is now: NodeHasSufficientPID
	  Normal   Starting                 111s               kubelet          Starting kubelet.
	  Normal   NodeReady                110s               kubelet          Node test-preload-959742 status is now: NodeReady
	  Normal   RegisteredNode           107s               node-controller  Node test-preload-959742 event: Registered Node test-preload-959742 in Controller
	  Normal   Starting                 26s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node test-preload-959742 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node test-preload-959742 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node test-preload-959742 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 22s                kubelet          Node test-preload-959742 has been rebooted, boot id: f724a3af-5baf-4c0f-b1ce-9ef62fa8e617
	  Normal   RegisteredNode           19s                node-controller  Node test-preload-959742 event: Registered Node test-preload-959742 in Controller
	
	
	==> dmesg <==
	[Sep17 01:02] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000065] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002911] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.969677] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084749] kauditd_printk_skb: 4 callbacks suppressed
	[Sep17 01:03] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.509064] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.000071] kauditd_printk_skb: 128 callbacks suppressed
	[  +5.528063] kauditd_printk_skb: 59 callbacks suppressed
	
	
	==> etcd [40358b986c0c41bc5d8b8627681ab0656ece95c8fbea738616cbe50924a8655a] <==
	{"level":"info","ts":"2025-09-17T01:03:01.808788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ac67dc4f6a3b478 switched to configuration voters=(1929367775279821944)"}
	{"level":"info","ts":"2025-09-17T01:03:01.809223Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"42330ef41d7a5350","local-member-id":"1ac67dc4f6a3b478","added-peer-id":"1ac67dc4f6a3b478","added-peer-peer-urls":["https://192.168.50.5:2380"]}
	{"level":"info","ts":"2025-09-17T01:03:01.809558Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"42330ef41d7a5350","local-member-id":"1ac67dc4f6a3b478","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-17T01:03:01.809754Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-17T01:03:01.812104Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-17T01:03:01.818035Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.50.5:2380"}
	{"level":"info","ts":"2025-09-17T01:03:01.818331Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.50.5:2380"}
	{"level":"info","ts":"2025-09-17T01:03:01.823119Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"1ac67dc4f6a3b478","initial-advertise-peer-urls":["https://192.168.50.5:2380"],"listen-peer-urls":["https://192.168.50.5:2380"],"advertise-client-urls":["https://192.168.50.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-17T01:03:01.823504Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-17T01:03:03.481328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ac67dc4f6a3b478 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-17T01:03:03.481387Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ac67dc4f6a3b478 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-17T01:03:03.481407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ac67dc4f6a3b478 received MsgPreVoteResp from 1ac67dc4f6a3b478 at term 2"}
	{"level":"info","ts":"2025-09-17T01:03:03.481469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ac67dc4f6a3b478 became candidate at term 3"}
	{"level":"info","ts":"2025-09-17T01:03:03.481489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ac67dc4f6a3b478 received MsgVoteResp from 1ac67dc4f6a3b478 at term 3"}
	{"level":"info","ts":"2025-09-17T01:03:03.481497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1ac67dc4f6a3b478 became leader at term 3"}
	{"level":"info","ts":"2025-09-17T01:03:03.481504Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1ac67dc4f6a3b478 elected leader 1ac67dc4f6a3b478 at term 3"}
	{"level":"info","ts":"2025-09-17T01:03:03.487676Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"1ac67dc4f6a3b478","local-member-attributes":"{Name:test-preload-959742 ClientURLs:[https://192.168.50.5:2379]}","request-path":"/0/members/1ac67dc4f6a3b478/attributes","cluster-id":"42330ef41d7a5350","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-17T01:03:03.487686Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-17T01:03:03.487710Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-17T01:03:03.488277Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-17T01:03:03.488317Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-17T01:03:03.489024Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-17T01:03:03.489334Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-17T01:03:03.490298Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.5:2379"}
	{"level":"info","ts":"2025-09-17T01:03:03.490939Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 01:03:26 up 0 min,  0 users,  load average: 0.88, 0.27, 0.09
	Linux test-preload-959742 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Sep  9 02:24:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [76fdc3f82622dc89965280db1741d97a2622a7b82e867302ca93d85009925ef6] <==
	I0917 01:03:04.650795       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 01:03:04.650815       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 01:03:04.700704       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0917 01:03:04.700932       1 shared_informer.go:320] Caches are synced for configmaps
	I0917 01:03:04.702527       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 01:03:04.710895       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0917 01:03:04.714035       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 01:03:04.714098       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 01:03:04.714182       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 01:03:04.714381       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 01:03:04.714906       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0917 01:03:04.718744       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 01:03:04.718779       1 policy_source.go:240] refreshing policies
	E0917 01:03:04.722878       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0917 01:03:04.733584       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 01:03:04.750899       1 cache.go:39] Caches are synced for autoregister controller
	I0917 01:03:05.376051       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0917 01:03:05.611986       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0917 01:03:06.743769       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0917 01:03:06.799723       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0917 01:03:06.863845       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 01:03:06.875253       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 01:03:08.252830       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 01:03:08.304816       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0917 01:03:08.352708       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [fc78de9e92bb78d4d80e93a4ad23adcf32583701c0e8968c54bdd5624cd1d463] <==
	I0917 01:03:07.903056       1 shared_informer.go:320] Caches are synced for attach detach
	I0917 01:03:07.908936       1 shared_informer.go:320] Caches are synced for disruption
	I0917 01:03:07.908950       1 shared_informer.go:320] Caches are synced for taint
	I0917 01:03:07.909057       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0917 01:03:07.909149       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 01:03:07.909282       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-959742"
	I0917 01:03:07.909361       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0917 01:03:07.919569       1 shared_informer.go:320] Caches are synced for stateful set
	I0917 01:03:07.919804       1 shared_informer.go:320] Caches are synced for HPA
	I0917 01:03:07.920034       1 shared_informer.go:320] Caches are synced for expand
	I0917 01:03:07.924816       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0917 01:03:07.925335       1 shared_informer.go:320] Caches are synced for cronjob
	I0917 01:03:07.928935       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0917 01:03:07.929124       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0917 01:03:07.944518       1 shared_informer.go:320] Caches are synced for TTL
	I0917 01:03:07.946521       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 01:03:07.951634       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-959742"
	I0917 01:03:08.311548       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="415.627435ms"
	I0917 01:03:08.311921       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="120.363µs"
	I0917 01:03:13.525149       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="58.957µs"
	I0917 01:03:14.824073       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-959742"
	I0917 01:03:14.839535       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-959742"
	I0917 01:03:17.910859       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0917 01:03:23.543824       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="21.422184ms"
	I0917 01:03:23.544038       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="113.615µs"
	
	
	==> kube-proxy [d032c0b19a8ad6558c563cd56850028f0aacad1e0c4deac4b65ddfc459bb3350] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0917 01:03:06.094565       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0917 01:03:06.106058       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.5"]
	E0917 01:03:06.106171       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 01:03:06.148212       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0917 01:03:06.148278       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 01:03:06.148319       1 server_linux.go:170] "Using iptables Proxier"
	I0917 01:03:06.151745       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 01:03:06.152222       1 server.go:497] "Version info" version="v1.32.0"
	I0917 01:03:06.152254       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 01:03:06.153875       1 config.go:199] "Starting service config controller"
	I0917 01:03:06.153946       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 01:03:06.153991       1 config.go:105] "Starting endpoint slice config controller"
	I0917 01:03:06.154013       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 01:03:06.155252       1 config.go:329] "Starting node config controller"
	I0917 01:03:06.155279       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 01:03:06.254209       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 01:03:06.254274       1 shared_informer.go:320] Caches are synced for service config
	I0917 01:03:06.255830       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a350fed20eb24eef39efa2fefb9b9792828aec4e252a8256a0532feed36f056f] <==
	I0917 01:03:02.250863       1 serving.go:386] Generated self-signed cert in-memory
	W0917 01:03:04.634989       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 01:03:04.635033       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W0917 01:03:04.635042       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 01:03:04.635052       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 01:03:04.681399       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I0917 01:03:04.681485       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 01:03:04.687396       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 01:03:04.687512       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 01:03:04.687632       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 01:03:04.687522       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 01:03:04.788164       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 01:03:04 test-preload-959742 kubelet[1161]: I0917 01:03:04.763214    1161 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 17 01:03:04 test-preload-959742 kubelet[1161]: I0917 01:03:04.765272    1161 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 17 01:03:04 test-preload-959742 kubelet[1161]: I0917 01:03:04.767116    1161 setters.go:602] "Node became not ready" node="test-preload-959742" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-17T01:03:04Z","lastTransitionTime":"2025-09-17T01:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Sep 17 01:03:04 test-preload-959742 kubelet[1161]: E0917 01:03:04.776655    1161 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-959742\" already exists" pod="kube-system/kube-apiserver-test-preload-959742"
	Sep 17 01:03:05 test-preload-959742 kubelet[1161]: I0917 01:03:05.267154    1161 apiserver.go:52] "Watching apiserver"
	Sep 17 01:03:05 test-preload-959742 kubelet[1161]: E0917 01:03:05.274372    1161 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-csbgr" podUID="abf70e6f-d95c-4efc-b8d2-6668b8e15546"
	Sep 17 01:03:05 test-preload-959742 kubelet[1161]: I0917 01:03:05.296893    1161 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Sep 17 01:03:05 test-preload-959742 kubelet[1161]: E0917 01:03:05.348353    1161 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Sep 17 01:03:05 test-preload-959742 kubelet[1161]: I0917 01:03:05.370096    1161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43532aea-4198-49b7-be80-0ad52a4970c3-lib-modules\") pod \"kube-proxy-xfm6w\" (UID: \"43532aea-4198-49b7-be80-0ad52a4970c3\") " pod="kube-system/kube-proxy-xfm6w"
	Sep 17 01:03:05 test-preload-959742 kubelet[1161]: I0917 01:03:05.370134    1161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ae22a5c6-623f-4ea1-befa-2d39c77970a9-tmp\") pod \"storage-provisioner\" (UID: \"ae22a5c6-623f-4ea1-befa-2d39c77970a9\") " pod="kube-system/storage-provisioner"
	Sep 17 01:03:05 test-preload-959742 kubelet[1161]: I0917 01:03:05.370154    1161 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43532aea-4198-49b7-be80-0ad52a4970c3-xtables-lock\") pod \"kube-proxy-xfm6w\" (UID: \"43532aea-4198-49b7-be80-0ad52a4970c3\") " pod="kube-system/kube-proxy-xfm6w"
	Sep 17 01:03:05 test-preload-959742 kubelet[1161]: E0917 01:03:05.370547    1161 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 17 01:03:05 test-preload-959742 kubelet[1161]: E0917 01:03:05.370619    1161 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/abf70e6f-d95c-4efc-b8d2-6668b8e15546-config-volume podName:abf70e6f-d95c-4efc-b8d2-6668b8e15546 nodeName:}" failed. No retries permitted until 2025-09-17 01:03:05.870599016 +0000 UTC m=+5.711829413 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/abf70e6f-d95c-4efc-b8d2-6668b8e15546-config-volume") pod "coredns-668d6bf9bc-csbgr" (UID: "abf70e6f-d95c-4efc-b8d2-6668b8e15546") : object "kube-system"/"coredns" not registered
	Sep 17 01:03:05 test-preload-959742 kubelet[1161]: E0917 01:03:05.873737    1161 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 17 01:03:05 test-preload-959742 kubelet[1161]: E0917 01:03:05.873890    1161 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/abf70e6f-d95c-4efc-b8d2-6668b8e15546-config-volume podName:abf70e6f-d95c-4efc-b8d2-6668b8e15546 nodeName:}" failed. No retries permitted until 2025-09-17 01:03:06.873874907 +0000 UTC m=+6.715105290 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/abf70e6f-d95c-4efc-b8d2-6668b8e15546-config-volume") pod "coredns-668d6bf9bc-csbgr" (UID: "abf70e6f-d95c-4efc-b8d2-6668b8e15546") : object "kube-system"/"coredns" not registered
	Sep 17 01:03:06 test-preload-959742 kubelet[1161]: E0917 01:03:06.882022    1161 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 17 01:03:06 test-preload-959742 kubelet[1161]: E0917 01:03:06.882122    1161 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/abf70e6f-d95c-4efc-b8d2-6668b8e15546-config-volume podName:abf70e6f-d95c-4efc-b8d2-6668b8e15546 nodeName:}" failed. No retries permitted until 2025-09-17 01:03:08.88210736 +0000 UTC m=+8.723337754 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/abf70e6f-d95c-4efc-b8d2-6668b8e15546-config-volume") pod "coredns-668d6bf9bc-csbgr" (UID: "abf70e6f-d95c-4efc-b8d2-6668b8e15546") : object "kube-system"/"coredns" not registered
	Sep 17 01:03:07 test-preload-959742 kubelet[1161]: E0917 01:03:07.362008    1161 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-csbgr" podUID="abf70e6f-d95c-4efc-b8d2-6668b8e15546"
	Sep 17 01:03:08 test-preload-959742 kubelet[1161]: E0917 01:03:08.900683    1161 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 17 01:03:08 test-preload-959742 kubelet[1161]: E0917 01:03:08.900795    1161 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/abf70e6f-d95c-4efc-b8d2-6668b8e15546-config-volume podName:abf70e6f-d95c-4efc-b8d2-6668b8e15546 nodeName:}" failed. No retries permitted until 2025-09-17 01:03:12.900779268 +0000 UTC m=+12.742009651 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/abf70e6f-d95c-4efc-b8d2-6668b8e15546-config-volume") pod "coredns-668d6bf9bc-csbgr" (UID: "abf70e6f-d95c-4efc-b8d2-6668b8e15546") : object "kube-system"/"coredns" not registered
	Sep 17 01:03:09 test-preload-959742 kubelet[1161]: E0917 01:03:09.361150    1161 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-csbgr" podUID="abf70e6f-d95c-4efc-b8d2-6668b8e15546"
	Sep 17 01:03:10 test-preload-959742 kubelet[1161]: E0917 01:03:10.350536    1161 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758070990350257622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 01:03:10 test-preload-959742 kubelet[1161]: E0917 01:03:10.350558    1161 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758070990350257622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 01:03:20 test-preload-959742 kubelet[1161]: E0917 01:03:20.354609    1161 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758071000353832563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 17 01:03:20 test-preload-959742 kubelet[1161]: E0917 01:03:20.354634    1161 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758071000353832563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [bad1e85a828db4212d1be3954fac4f42155a36b07c635f0745fb47bcd442ec3b] <==
	I0917 01:03:06.022659       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-959742 -n test-preload-959742
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-959742 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-959742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-959742
--- FAIL: TestPreload (163.49s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (99.34s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-003341 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-003341 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m34.437806498s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-003341] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-003341" primary control-plane node in "pause-003341" cluster
	* Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-003341" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 01:08:03.534309  183156 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:08:03.534436  183156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:08:03.534445  183156 out.go:374] Setting ErrFile to fd 2...
	I0917 01:08:03.534449  183156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:08:03.534681  183156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
	I0917 01:08:03.535134  183156 out.go:368] Setting JSON to false
	I0917 01:08:03.536034  183156 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":13828,"bootTime":1758057456,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 01:08:03.536135  183156 start.go:140] virtualization: kvm guest
	I0917 01:08:03.537921  183156 out.go:179] * [pause-003341] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 01:08:03.539053  183156 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 01:08:03.539100  183156 notify.go:220] Checking for updates...
	I0917 01:08:03.541679  183156 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:08:03.542885  183156 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	I0917 01:08:03.544003  183156 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	I0917 01:08:03.545135  183156 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 01:08:03.546215  183156 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:08:03.547752  183156 config.go:182] Loaded profile config "pause-003341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:08:03.548200  183156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:08:03.548325  183156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:08:03.563191  183156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44563
	I0917 01:08:03.563798  183156 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:08:03.564428  183156 main.go:141] libmachine: Using API Version  1
	I0917 01:08:03.564458  183156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:08:03.564876  183156 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:08:03.565097  183156 main.go:141] libmachine: (pause-003341) Calling .DriverName
	I0917 01:08:03.565376  183156 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 01:08:03.565721  183156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:08:03.565764  183156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:08:03.580213  183156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43285
	I0917 01:08:03.580848  183156 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:08:03.581365  183156 main.go:141] libmachine: Using API Version  1
	I0917 01:08:03.581398  183156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:08:03.581764  183156 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:08:03.581988  183156 main.go:141] libmachine: (pause-003341) Calling .DriverName
	I0917 01:08:03.616843  183156 out.go:179] * Using the kvm2 driver based on existing profile
	I0917 01:08:03.617999  183156 start.go:304] selected driver: kvm2
	I0917 01:08:03.618018  183156 start.go:918] validating driver "kvm2" against &{Name:pause-003341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-003341 Namespace:def
ault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.157 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false po
d-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:08:03.618195  183156 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:08:03.618603  183156 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:08:03.618694  183156 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21550-141589/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 01:08:03.634076  183156 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0917 01:08:03.634932  183156 cni.go:84] Creating CNI manager for ""
	I0917 01:08:03.634990  183156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 01:08:03.635048  183156 start.go:348] cluster config:
	{Name:pause-003341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-003341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.157 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false
registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:08:03.635277  183156 iso.go:125] acquiring lock: {Name:mkbc497934aeda3bf1eaa3e96176da91d2f10b30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:08:03.636818  183156 out.go:179] * Starting "pause-003341" primary control-plane node in "pause-003341" cluster
	I0917 01:08:03.637917  183156 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:08:03.637982  183156 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 01:08:03.637998  183156 cache.go:58] Caching tarball of preloaded images
	I0917 01:08:03.638109  183156 preload.go:172] Found /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 01:08:03.638129  183156 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 01:08:03.638301  183156 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/pause-003341/config.json ...
	I0917 01:08:03.638559  183156 start.go:360] acquireMachinesLock for pause-003341: {Name:mk4898504d31cc722a10b1787754ef8ecd27d0ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 01:08:33.818449  183156 start.go:364] duration metric: took 30.179843835s to acquireMachinesLock for "pause-003341"
	I0917 01:08:33.818519  183156 start.go:96] Skipping create...Using existing machine configuration
	I0917 01:08:33.818528  183156 fix.go:54] fixHost starting: 
	I0917 01:08:33.819206  183156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:08:33.819266  183156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:08:33.835981  183156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I0917 01:08:33.836636  183156 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:08:33.837338  183156 main.go:141] libmachine: Using API Version  1
	I0917 01:08:33.837383  183156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:08:33.837823  183156 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:08:33.838064  183156 main.go:141] libmachine: (pause-003341) Calling .DriverName
	I0917 01:08:33.838216  183156 main.go:141] libmachine: (pause-003341) Calling .GetState
	I0917 01:08:33.840318  183156 fix.go:112] recreateIfNeeded on pause-003341: state=Running err=<nil>
	W0917 01:08:33.840340  183156 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 01:08:33.842426  183156 out.go:252] * Updating the running kvm2 "pause-003341" VM ...
	I0917 01:08:33.842488  183156 machine.go:93] provisionDockerMachine start ...
	I0917 01:08:33.842507  183156 main.go:141] libmachine: (pause-003341) Calling .DriverName
	I0917 01:08:33.842796  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHHostname
	I0917 01:08:33.846485  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:33.847019  183156 main.go:141] libmachine: (pause-003341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:1e:ab", ip: ""} in network mk-pause-003341: {Iface:virbr5 ExpiryTime:2025-09-17 02:06:57 +0000 UTC Type:0 Mac:52:54:00:04:1e:ab Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:pause-003341 Clientid:01:52:54:00:04:1e:ab}
	I0917 01:08:33.847053  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined IP address 192.168.83.157 and MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:33.847255  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHPort
	I0917 01:08:33.847429  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHKeyPath
	I0917 01:08:33.847639  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHKeyPath
	I0917 01:08:33.847804  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHUsername
	I0917 01:08:33.848016  183156 main.go:141] libmachine: Using SSH client type: native
	I0917 01:08:33.848283  183156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.83.157 22 <nil> <nil>}
	I0917 01:08:33.848296  183156 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 01:08:33.964354  183156 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-003341
	
	I0917 01:08:33.964395  183156 main.go:141] libmachine: (pause-003341) Calling .GetMachineName
	I0917 01:08:33.964726  183156 buildroot.go:166] provisioning hostname "pause-003341"
	I0917 01:08:33.964789  183156 main.go:141] libmachine: (pause-003341) Calling .GetMachineName
	I0917 01:08:33.965044  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHHostname
	I0917 01:08:33.969312  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:33.969876  183156 main.go:141] libmachine: (pause-003341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:1e:ab", ip: ""} in network mk-pause-003341: {Iface:virbr5 ExpiryTime:2025-09-17 02:06:57 +0000 UTC Type:0 Mac:52:54:00:04:1e:ab Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:pause-003341 Clientid:01:52:54:00:04:1e:ab}
	I0917 01:08:33.969926  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined IP address 192.168.83.157 and MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:33.970158  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHPort
	I0917 01:08:33.970389  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHKeyPath
	I0917 01:08:33.970603  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHKeyPath
	I0917 01:08:33.970795  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHUsername
	I0917 01:08:33.971047  183156 main.go:141] libmachine: Using SSH client type: native
	I0917 01:08:33.971342  183156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.83.157 22 <nil> <nil>}
	I0917 01:08:33.971365  183156 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-003341 && echo "pause-003341" | sudo tee /etc/hostname
	I0917 01:08:34.110457  183156 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-003341
	
	I0917 01:08:34.110518  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHHostname
	I0917 01:08:34.116119  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:34.116667  183156 main.go:141] libmachine: (pause-003341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:1e:ab", ip: ""} in network mk-pause-003341: {Iface:virbr5 ExpiryTime:2025-09-17 02:06:57 +0000 UTC Type:0 Mac:52:54:00:04:1e:ab Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:pause-003341 Clientid:01:52:54:00:04:1e:ab}
	I0917 01:08:34.116707  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined IP address 192.168.83.157 and MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:34.116907  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHPort
	I0917 01:08:34.117334  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHKeyPath
	I0917 01:08:34.117547  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHKeyPath
	I0917 01:08:34.117697  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHUsername
	I0917 01:08:34.117913  183156 main.go:141] libmachine: Using SSH client type: native
	I0917 01:08:34.118225  183156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.83.157 22 <nil> <nil>}
	I0917 01:08:34.118255  183156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-003341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-003341/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-003341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 01:08:34.240447  183156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 01:08:34.240490  183156 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21550-141589/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-141589/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-141589/.minikube}
	I0917 01:08:34.240522  183156 buildroot.go:174] setting up certificates
	I0917 01:08:34.240536  183156 provision.go:84] configureAuth start
	I0917 01:08:34.240551  183156 main.go:141] libmachine: (pause-003341) Calling .GetMachineName
	I0917 01:08:34.240975  183156 main.go:141] libmachine: (pause-003341) Calling .GetIP
	I0917 01:08:34.244959  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:34.245359  183156 main.go:141] libmachine: (pause-003341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:1e:ab", ip: ""} in network mk-pause-003341: {Iface:virbr5 ExpiryTime:2025-09-17 02:06:57 +0000 UTC Type:0 Mac:52:54:00:04:1e:ab Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:pause-003341 Clientid:01:52:54:00:04:1e:ab}
	I0917 01:08:34.245388  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined IP address 192.168.83.157 and MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:34.245627  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHHostname
	I0917 01:08:34.249263  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:34.249671  183156 main.go:141] libmachine: (pause-003341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:1e:ab", ip: ""} in network mk-pause-003341: {Iface:virbr5 ExpiryTime:2025-09-17 02:06:57 +0000 UTC Type:0 Mac:52:54:00:04:1e:ab Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:pause-003341 Clientid:01:52:54:00:04:1e:ab}
	I0917 01:08:34.249711  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined IP address 192.168.83.157 and MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:34.249994  183156 provision.go:143] copyHostCerts
	I0917 01:08:34.250074  183156 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-141589/.minikube/key.pem, removing ...
	I0917 01:08:34.250098  183156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-141589/.minikube/key.pem
	I0917 01:08:34.250170  183156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-141589/.minikube/key.pem (1675 bytes)
	I0917 01:08:34.250304  183156 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-141589/.minikube/ca.pem, removing ...
	I0917 01:08:34.250315  183156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-141589/.minikube/ca.pem
	I0917 01:08:34.250350  183156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-141589/.minikube/ca.pem (1078 bytes)
	I0917 01:08:34.250427  183156 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-141589/.minikube/cert.pem, removing ...
	I0917 01:08:34.250443  183156 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-141589/.minikube/cert.pem
	I0917 01:08:34.250474  183156 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-141589/.minikube/cert.pem (1123 bytes)
	I0917 01:08:34.250539  183156 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-141589/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca-key.pem org=jenkins.pause-003341 san=[127.0.0.1 192.168.83.157 localhost minikube pause-003341]
	I0917 01:08:34.650557  183156 provision.go:177] copyRemoteCerts
	I0917 01:08:34.650627  183156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 01:08:34.650657  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHHostname
	I0917 01:08:34.654947  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:34.655336  183156 main.go:141] libmachine: (pause-003341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:1e:ab", ip: ""} in network mk-pause-003341: {Iface:virbr5 ExpiryTime:2025-09-17 02:06:57 +0000 UTC Type:0 Mac:52:54:00:04:1e:ab Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:pause-003341 Clientid:01:52:54:00:04:1e:ab}
	I0917 01:08:34.655389  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined IP address 192.168.83.157 and MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:34.655637  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHPort
	I0917 01:08:34.655904  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHKeyPath
	I0917 01:08:34.656062  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHUsername
	I0917 01:08:34.656202  183156 sshutil.go:53] new ssh client: &{IP:192.168.83.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/pause-003341/id_rsa Username:docker}
	I0917 01:08:34.751696  183156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 01:08:34.793677  183156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0917 01:08:34.843971  183156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 01:08:34.881722  183156 provision.go:87] duration metric: took 641.164736ms to configureAuth
	I0917 01:08:34.881769  183156 buildroot.go:189] setting minikube options for container-runtime
	I0917 01:08:34.882094  183156 config.go:182] Loaded profile config "pause-003341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:08:34.882207  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHHostname
	I0917 01:08:34.886225  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:34.886824  183156 main.go:141] libmachine: (pause-003341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:1e:ab", ip: ""} in network mk-pause-003341: {Iface:virbr5 ExpiryTime:2025-09-17 02:06:57 +0000 UTC Type:0 Mac:52:54:00:04:1e:ab Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:pause-003341 Clientid:01:52:54:00:04:1e:ab}
	I0917 01:08:34.886893  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined IP address 192.168.83.157 and MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:34.887178  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHPort
	I0917 01:08:34.887471  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHKeyPath
	I0917 01:08:34.887681  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHKeyPath
	I0917 01:08:34.887942  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHUsername
	I0917 01:08:34.888189  183156 main.go:141] libmachine: Using SSH client type: native
	I0917 01:08:34.888428  183156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.83.157 22 <nil> <nil>}
	I0917 01:08:34.888450  183156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 01:08:40.606689  183156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 01:08:40.606720  183156 machine.go:96] duration metric: took 6.764220749s to provisionDockerMachine
	I0917 01:08:40.606735  183156 start.go:293] postStartSetup for "pause-003341" (driver="kvm2")
	I0917 01:08:40.606749  183156 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 01:08:40.606777  183156 main.go:141] libmachine: (pause-003341) Calling .DriverName
	I0917 01:08:40.607196  183156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 01:08:40.607236  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHHostname
	I0917 01:08:40.611150  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:40.611639  183156 main.go:141] libmachine: (pause-003341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:1e:ab", ip: ""} in network mk-pause-003341: {Iface:virbr5 ExpiryTime:2025-09-17 02:06:57 +0000 UTC Type:0 Mac:52:54:00:04:1e:ab Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:pause-003341 Clientid:01:52:54:00:04:1e:ab}
	I0917 01:08:40.611675  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined IP address 192.168.83.157 and MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:40.612019  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHPort
	I0917 01:08:40.612254  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHKeyPath
	I0917 01:08:40.612455  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHUsername
	I0917 01:08:40.612635  183156 sshutil.go:53] new ssh client: &{IP:192.168.83.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/pause-003341/id_rsa Username:docker}
	I0917 01:08:40.701610  183156 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 01:08:40.707915  183156 info.go:137] Remote host: Buildroot 2025.02
	I0917 01:08:40.707958  183156 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-141589/.minikube/addons for local assets ...
	I0917 01:08:40.708092  183156 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-141589/.minikube/files for local assets ...
	I0917 01:08:40.708226  183156 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-141589/.minikube/files/etc/ssl/certs/1455302.pem -> 1455302.pem in /etc/ssl/certs
	I0917 01:08:40.708343  183156 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0917 01:08:40.722451  183156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/files/etc/ssl/certs/1455302.pem --> /etc/ssl/certs/1455302.pem (1708 bytes)
	I0917 01:08:40.759610  183156 start.go:296] duration metric: took 152.856694ms for postStartSetup
	I0917 01:08:40.759662  183156 fix.go:56] duration metric: took 6.94113498s for fixHost
	I0917 01:08:40.759691  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHHostname
	I0917 01:08:40.763392  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:40.763905  183156 main.go:141] libmachine: (pause-003341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:1e:ab", ip: ""} in network mk-pause-003341: {Iface:virbr5 ExpiryTime:2025-09-17 02:06:57 +0000 UTC Type:0 Mac:52:54:00:04:1e:ab Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:pause-003341 Clientid:01:52:54:00:04:1e:ab}
	I0917 01:08:40.763943  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined IP address 192.168.83.157 and MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:40.764293  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHPort
	I0917 01:08:40.764518  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHKeyPath
	I0917 01:08:40.764755  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHKeyPath
	I0917 01:08:40.764999  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHUsername
	I0917 01:08:40.765199  183156 main.go:141] libmachine: Using SSH client type: native
	I0917 01:08:40.765491  183156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.83.157 22 <nil> <nil>}
	I0917 01:08:40.765510  183156 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0917 01:08:40.884474  183156 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758071320.877321687
	
	I0917 01:08:40.884511  183156 fix.go:216] guest clock: 1758071320.877321687
	I0917 01:08:40.884522  183156 fix.go:229] Guest: 2025-09-17 01:08:40.877321687 +0000 UTC Remote: 2025-09-17 01:08:40.759667872 +0000 UTC m=+37.267682715 (delta=117.653815ms)
	I0917 01:08:40.884581  183156 fix.go:200] guest clock delta is within tolerance: 117.653815ms
	I0917 01:08:40.884590  183156 start.go:83] releasing machines lock for "pause-003341", held for 7.066097947s
	I0917 01:08:40.884624  183156 main.go:141] libmachine: (pause-003341) Calling .DriverName
	I0917 01:08:40.884954  183156 main.go:141] libmachine: (pause-003341) Calling .GetIP
	I0917 01:08:40.888216  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:40.888770  183156 main.go:141] libmachine: (pause-003341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:1e:ab", ip: ""} in network mk-pause-003341: {Iface:virbr5 ExpiryTime:2025-09-17 02:06:57 +0000 UTC Type:0 Mac:52:54:00:04:1e:ab Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:pause-003341 Clientid:01:52:54:00:04:1e:ab}
	I0917 01:08:40.888812  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined IP address 192.168.83.157 and MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:40.888998  183156 main.go:141] libmachine: (pause-003341) Calling .DriverName
	I0917 01:08:40.889550  183156 main.go:141] libmachine: (pause-003341) Calling .DriverName
	I0917 01:08:40.889756  183156 main.go:141] libmachine: (pause-003341) Calling .DriverName
	I0917 01:08:40.889899  183156 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 01:08:40.889961  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHHostname
	I0917 01:08:40.890075  183156 ssh_runner.go:195] Run: cat /version.json
	I0917 01:08:40.890112  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHHostname
	I0917 01:08:40.894204  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:40.894259  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:40.894761  183156 main.go:141] libmachine: (pause-003341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:1e:ab", ip: ""} in network mk-pause-003341: {Iface:virbr5 ExpiryTime:2025-09-17 02:06:57 +0000 UTC Type:0 Mac:52:54:00:04:1e:ab Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:pause-003341 Clientid:01:52:54:00:04:1e:ab}
	I0917 01:08:40.894795  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined IP address 192.168.83.157 and MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:40.894876  183156 main.go:141] libmachine: (pause-003341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:1e:ab", ip: ""} in network mk-pause-003341: {Iface:virbr5 ExpiryTime:2025-09-17 02:06:57 +0000 UTC Type:0 Mac:52:54:00:04:1e:ab Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:pause-003341 Clientid:01:52:54:00:04:1e:ab}
	I0917 01:08:40.894899  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined IP address 192.168.83.157 and MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:40.895080  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHPort
	I0917 01:08:40.895108  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHPort
	I0917 01:08:40.895281  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHKeyPath
	I0917 01:08:40.895332  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHKeyPath
	I0917 01:08:40.895424  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHUsername
	I0917 01:08:40.895538  183156 main.go:141] libmachine: (pause-003341) Calling .GetSSHUsername
	I0917 01:08:40.895572  183156 sshutil.go:53] new ssh client: &{IP:192.168.83.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/pause-003341/id_rsa Username:docker}
	I0917 01:08:40.895690  183156 sshutil.go:53] new ssh client: &{IP:192.168.83.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/pause-003341/id_rsa Username:docker}
	I0917 01:08:41.109088  183156 ssh_runner.go:195] Run: systemctl --version
	I0917 01:08:41.133124  183156 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 01:08:41.435591  183156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0917 01:08:41.450887  183156 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0917 01:08:41.450969  183156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 01:08:41.482913  183156 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 01:08:41.482946  183156 start.go:495] detecting cgroup driver to use...
	I0917 01:08:41.483006  183156 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 01:08:41.530079  183156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 01:08:41.569160  183156 docker.go:218] disabling cri-docker service (if available) ...
	I0917 01:08:41.569253  183156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 01:08:41.622718  183156 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 01:08:41.674116  183156 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 01:08:42.117013  183156 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 01:08:42.566922  183156 docker.go:234] disabling docker service ...
	I0917 01:08:42.567004  183156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 01:08:42.648647  183156 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 01:08:42.687227  183156 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 01:08:43.069448  183156 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 01:08:43.469154  183156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 01:08:43.502647  183156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 01:08:43.568584  183156 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 01:08:43.568648  183156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:08:43.606505  183156 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 01:08:43.606578  183156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:08:43.631620  183156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:08:43.658709  183156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:08:43.679232  183156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 01:08:43.696154  183156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:08:43.719709  183156 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:08:43.748512  183156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 01:08:43.767269  183156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 01:08:43.781939  183156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 01:08:43.805737  183156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:08:44.132695  183156 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 01:08:54.147031  183156 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.014284649s)
	I0917 01:08:54.147078  183156 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 01:08:54.147148  183156 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 01:08:54.154131  183156 start.go:563] Will wait 60s for crictl version
	I0917 01:08:54.154209  183156 ssh_runner.go:195] Run: which crictl
	I0917 01:08:54.158907  183156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 01:08:54.204558  183156 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0917 01:08:54.204660  183156 ssh_runner.go:195] Run: crio --version
	I0917 01:08:54.241987  183156 ssh_runner.go:195] Run: crio --version
	I0917 01:08:54.282453  183156 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0917 01:08:54.284171  183156 main.go:141] libmachine: (pause-003341) Calling .GetIP
	I0917 01:08:54.288298  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:54.288807  183156 main.go:141] libmachine: (pause-003341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:1e:ab", ip: ""} in network mk-pause-003341: {Iface:virbr5 ExpiryTime:2025-09-17 02:06:57 +0000 UTC Type:0 Mac:52:54:00:04:1e:ab Iaid: IPaddr:192.168.83.157 Prefix:24 Hostname:pause-003341 Clientid:01:52:54:00:04:1e:ab}
	I0917 01:08:54.288848  183156 main.go:141] libmachine: (pause-003341) DBG | domain pause-003341 has defined IP address 192.168.83.157 and MAC address 52:54:00:04:1e:ab in network mk-pause-003341
	I0917 01:08:54.289170  183156 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0917 01:08:54.294546  183156 kubeadm.go:875] updating cluster {Name:pause-003341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-003341 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.157 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-poli
cy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 01:08:54.294705  183156 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:08:54.294783  183156 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:08:54.361553  183156 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:08:54.361586  183156 crio.go:433] Images already preloaded, skipping extraction
	I0917 01:08:54.361662  183156 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:08:54.462503  183156 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 01:08:54.462540  183156 cache_images.go:85] Images are preloaded, skipping loading
	I0917 01:08:54.462554  183156 kubeadm.go:926] updating node { 192.168.83.157 8443 v1.34.0 crio true true} ...
	I0917 01:08:54.462706  183156 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-003341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:pause-003341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 01:08:54.462803  183156 ssh_runner.go:195] Run: crio config
	I0917 01:08:54.572174  183156 cni.go:84] Creating CNI manager for ""
	I0917 01:08:54.572213  183156 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 01:08:54.572229  183156 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 01:08:54.572257  183156 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.157 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-003341 NodeName:pause-003341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 01:08:54.572483  183156 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-003341"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.157"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.157"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 01:08:54.572576  183156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 01:08:54.606909  183156 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 01:08:54.607000  183156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 01:08:54.640454  183156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 01:08:54.719372  183156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 01:08:54.803598  183156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I0917 01:08:54.872305  183156 ssh_runner.go:195] Run: grep 192.168.83.157	control-plane.minikube.internal$ /etc/hosts
	I0917 01:08:54.880955  183156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 01:08:55.255494  183156 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 01:08:55.296324  183156 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/pause-003341 for IP: 192.168.83.157
	I0917 01:08:55.296352  183156 certs.go:194] generating shared ca certs ...
	I0917 01:08:55.296379  183156 certs.go:226] acquiring lock for ca certs: {Name:mk9185d5103eebb4e8c41dd45f840888861a3f37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:08:55.296599  183156 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-141589/.minikube/ca.key
	I0917 01:08:55.296665  183156 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-141589/.minikube/proxy-client-ca.key
	I0917 01:08:55.296684  183156 certs.go:256] generating profile certs ...
	I0917 01:08:55.296834  183156 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/pause-003341/client.key
	I0917 01:08:55.296952  183156 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/pause-003341/apiserver.key.7b3255fb
	I0917 01:08:55.297031  183156 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/pause-003341/proxy-client.key
	I0917 01:08:55.297209  183156 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/145530.pem (1338 bytes)
	W0917 01:08:55.297269  183156 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-141589/.minikube/certs/145530_empty.pem, impossibly tiny 0 bytes
	I0917 01:08:55.297284  183156 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 01:08:55.297318  183156 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem (1078 bytes)
	I0917 01:08:55.297355  183156 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/cert.pem (1123 bytes)
	I0917 01:08:55.297387  183156 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/key.pem (1675 bytes)
	I0917 01:08:55.297448  183156 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/files/etc/ssl/certs/1455302.pem (1708 bytes)
	I0917 01:08:55.298516  183156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 01:08:55.395487  183156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 01:08:55.496253  183156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 01:08:55.588174  183156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 01:08:55.728687  183156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/pause-003341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 01:08:55.861409  183156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/pause-003341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 01:08:55.935987  183156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/pause-003341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 01:08:56.053610  183156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/pause-003341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 01:08:56.131775  183156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/certs/145530.pem --> /usr/share/ca-certificates/145530.pem (1338 bytes)
	I0917 01:08:56.207839  183156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/files/etc/ssl/certs/1455302.pem --> /usr/share/ca-certificates/1455302.pem (1708 bytes)
	I0917 01:08:56.269583  183156 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 01:08:56.327998  183156 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 01:08:56.363556  183156 ssh_runner.go:195] Run: openssl version
	I0917 01:08:56.378259  183156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145530.pem && ln -fs /usr/share/ca-certificates/145530.pem /etc/ssl/certs/145530.pem"
	I0917 01:08:56.402339  183156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145530.pem
	I0917 01:08:56.410523  183156 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:07 /usr/share/ca-certificates/145530.pem
	I0917 01:08:56.410606  183156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145530.pem
	I0917 01:08:56.424577  183156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/145530.pem /etc/ssl/certs/51391683.0"
	I0917 01:08:56.452868  183156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1455302.pem && ln -fs /usr/share/ca-certificates/1455302.pem /etc/ssl/certs/1455302.pem"
	I0917 01:08:56.498507  183156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1455302.pem
	I0917 01:08:56.511756  183156 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:07 /usr/share/ca-certificates/1455302.pem
	I0917 01:08:56.511842  183156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1455302.pem
	I0917 01:08:56.528716  183156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1455302.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 01:08:56.549882  183156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 01:08:56.570658  183156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:08:56.577407  183156 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:58 /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:08:56.577521  183156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:08:56.592575  183156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 01:08:56.612671  183156 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 01:08:56.622002  183156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 01:08:56.633633  183156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 01:08:56.646028  183156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 01:08:56.657461  183156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 01:08:56.669096  183156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 01:08:56.680961  183156 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 01:08:56.698321  183156 kubeadm.go:392] StartCluster: {Name:pause-003341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:pause-003341 Namespace:default APIServerHAVI
P: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.157 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:
false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:08:56.698525  183156 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 01:08:56.698647  183156 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 01:08:56.825841  183156 cri.go:89] found id: "12e1d4b09635725fe3cc3618f2b4f504d842e8fa45c3beb86351205891a16273"
	I0917 01:08:56.825891  183156 cri.go:89] found id: "ce8c60df3d44f099c4f0580568abfd3263339a7ed6902dd33ff338e4a1b2aaff"
	I0917 01:08:56.825897  183156 cri.go:89] found id: "e50dfd1f5829bf9fe520d625940dafacd9399aa820576b5ca8cd609c37b57203"
	I0917 01:08:56.825907  183156 cri.go:89] found id: "333555e32dc3f31b5f185685584e3326582004dd3c02757ed5391fcbd05013a5"
	I0917 01:08:56.825911  183156 cri.go:89] found id: "4aa7dd95863c1e7ffdc2589551415495541f8d5d4e89f4ee106a8a1b1072693d"
	I0917 01:08:56.825916  183156 cri.go:89] found id: "12c017f30b706451dfedd28d44162a42e54b53c998b4d03a79a2e78f229bc8c3"
	I0917 01:08:56.825920  183156 cri.go:89] found id: "d16f9589afc868582e5ab79c8c003f0e4691d8888069b67a1306fa7b5945e4f7"
	I0917 01:08:56.825924  183156 cri.go:89] found id: "c865503b34184bb483f3e1371762996212a1e2653837d359b0f891f6221b426d"
	I0917 01:08:56.825927  183156 cri.go:89] found id: "83dfd35244c6e72a161ef5e7218c42658929bf3bea2a2376067e66a580c34351"
	I0917 01:08:56.825936  183156 cri.go:89] found id: "94d17330236e8e2a6cd1c3a3cc11d0c3630ead898a2a701030c3dfac73360d2d"
	I0917 01:08:56.825941  183156 cri.go:89] found id: "448188c9ac4290cc1e16dae3041c086b2d4dc1b3695789856ce3da14eae2a83b"
	I0917 01:08:56.825944  183156 cri.go:89] found id: ""
	I0917 01:08:56.826010  183156 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-003341 -n pause-003341
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-003341 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-003341 logs -n 25: (1.559033651s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-733841 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                              │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo cat /etc/kubernetes/kubelet.conf                                                                                                             │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo cat /var/lib/kubelet/config.yaml                                                                                                             │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo systemctl status docker --all --full --no-pager                                                                                              │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo systemctl cat docker --no-pager                                                                                                              │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo cat /etc/docker/daemon.json                                                                                                                  │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo docker system info                                                                                                                           │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo systemctl status cri-docker --all --full --no-pager                                                                                          │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo systemctl cat cri-docker --no-pager                                                                                                          │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                     │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                               │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo cri-dockerd --version                                                                                                                        │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo systemctl status containerd --all --full --no-pager                                                                                          │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo systemctl cat containerd --no-pager                                                                                                          │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo cat /lib/systemd/system/containerd.service                                                                                                   │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo cat /etc/containerd/config.toml                                                                                                              │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo containerd config dump                                                                                                                       │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo systemctl status crio --all --full --no-pager                                                                                                │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo systemctl cat crio --no-pager                                                                                                                │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                      │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo crio config                                                                                                                                  │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ delete  │ -p cilium-733841                                                                                                                                                   │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │ 17 Sep 25 01:08 UTC │
	│ start   │ -p stopped-upgrade-369624 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ stopped-upgrade-369624    │ jenkins │ v1.32.0 │ 17 Sep 25 01:08 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-661366                                                                                                                                       │ kubernetes-upgrade-661366 │ jenkins │ v1.37.0 │ 17 Sep 25 01:09 UTC │ 17 Sep 25 01:09 UTC │
	│ start   │ -p kubernetes-upgrade-661366 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-661366 │ jenkins │ v1.37.0 │ 17 Sep 25 01:09 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 01:09:31
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 01:09:31.232102  186335 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:09:31.232390  186335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:09:31.232402  186335 out.go:374] Setting ErrFile to fd 2...
	I0917 01:09:31.232410  186335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:09:31.232644  186335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
	I0917 01:09:31.233309  186335 out.go:368] Setting JSON to false
	I0917 01:09:31.234504  186335 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":13915,"bootTime":1758057456,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 01:09:31.234622  186335 start.go:140] virtualization: kvm guest
	I0917 01:09:31.236824  186335 out.go:179] * [kubernetes-upgrade-661366] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 01:09:31.238867  186335 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 01:09:31.238872  186335 notify.go:220] Checking for updates...
	I0917 01:09:31.240723  186335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:09:31.242239  186335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	I0917 01:09:31.243657  186335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	I0917 01:09:31.245129  186335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 01:09:31.246475  186335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:09:31.248623  186335 config.go:182] Loaded profile config "kubernetes-upgrade-661366": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0917 01:09:31.249302  186335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:09:31.249403  186335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:09:31.268918  186335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41789
	I0917 01:09:31.269629  186335 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:09:31.270300  186335 main.go:141] libmachine: Using API Version  1
	I0917 01:09:31.270340  186335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:09:31.271120  186335 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:09:31.271347  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) Calling .DriverName
	I0917 01:09:31.271765  186335 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 01:09:31.272396  186335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:09:31.272458  186335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:09:31.294507  186335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40861
	I0917 01:09:31.295021  186335 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:09:31.295700  186335 main.go:141] libmachine: Using API Version  1
	I0917 01:09:31.295750  186335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:09:31.296314  186335 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:09:31.296603  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) Calling .DriverName
	I0917 01:09:31.346483  186335 out.go:179] * Using the kvm2 driver based on existing profile
	I0917 01:09:31.347705  186335 start.go:304] selected driver: kvm2
	I0917 01:09:31.347728  186335 start.go:918] validating driver "kvm2" against &{Name:kubernetes-upgrade-661366 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-up
grade-661366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.189 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:
false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:09:31.347885  186335 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:09:31.348790  186335 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:09:31.348910  186335 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21550-141589/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 01:09:31.364832  186335 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0917 01:09:31.365319  186335 cni.go:84] Creating CNI manager for ""
	I0917 01:09:31.365383  186335 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 01:09:31.365419  186335 start.go:348] cluster config:
	{Name:kubernetes-upgrade-661366 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-661366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.189 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:09:31.365528  186335 iso.go:125] acquiring lock: {Name:mkbc497934aeda3bf1eaa3e96176da91d2f10b30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:09:31.367478  186335 out.go:179] * Starting "kubernetes-upgrade-661366" primary control-plane node in "kubernetes-upgrade-661366" cluster
	I0917 01:09:31.368619  186335 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:09:31.368666  186335 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 01:09:31.368678  186335 cache.go:58] Caching tarball of preloaded images
	I0917 01:09:31.368792  186335 preload.go:172] Found /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 01:09:31.368809  186335 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 01:09:31.368937  186335 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/kubernetes-upgrade-661366/config.json ...
	I0917 01:09:31.369144  186335 start.go:360] acquireMachinesLock for kubernetes-upgrade-661366: {Name:mk4898504d31cc722a10b1787754ef8ecd27d0ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 01:09:31.369213  186335 start.go:364] duration metric: took 30.56µs to acquireMachinesLock for "kubernetes-upgrade-661366"
	I0917 01:09:31.369233  186335 start.go:96] Skipping create...Using existing machine configuration
	I0917 01:09:31.369240  186335 fix.go:54] fixHost starting: 
	I0917 01:09:31.369641  186335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:09:31.369694  186335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:09:31.383680  186335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I0917 01:09:31.384203  186335 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:09:31.384700  186335 main.go:141] libmachine: Using API Version  1
	I0917 01:09:31.384724  186335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:09:31.385166  186335 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:09:31.385444  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) Calling .DriverName
	I0917 01:09:31.385616  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) Calling .GetState
	I0917 01:09:31.387649  186335 fix.go:112] recreateIfNeeded on kubernetes-upgrade-661366: state=Stopped err=<nil>
	I0917 01:09:31.387689  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) Calling .DriverName
	W0917 01:09:31.387915  186335 fix.go:138] unexpected machine state, will restart: <nil>
	W0917 01:09:29.551984  183156 pod_ready.go:104] pod "etcd-pause-003341" is not "Ready", error: <nil>
	W0917 01:09:32.053393  183156 pod_ready.go:104] pod "etcd-pause-003341" is not "Ready", error: <nil>
	I0917 01:09:29.660141  185907 main.go:141] libmachine: (stopped-upgrade-369624) Calling .GetIP
	I0917 01:09:29.663629  185907 main.go:141] libmachine: (stopped-upgrade-369624) DBG | domain stopped-upgrade-369624 has defined MAC address 52:54:00:5d:06:34 in network mk-stopped-upgrade-369624
	I0917 01:09:29.663988  185907 main.go:141] libmachine: (stopped-upgrade-369624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:06:34", ip: ""} in network mk-stopped-upgrade-369624: {Iface:virbr4 ExpiryTime:2025-09-17 02:09:22 +0000 UTC Type:0 Mac:52:54:00:5d:06:34 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:stopped-upgrade-369624 Clientid:01:52:54:00:5d:06:34}
	I0917 01:09:29.664015  185907 main.go:141] libmachine: (stopped-upgrade-369624) DBG | domain stopped-upgrade-369624 has defined IP address 192.168.61.95 and MAC address 52:54:00:5d:06:34 in network mk-stopped-upgrade-369624
	I0917 01:09:29.664404  185907 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0917 01:09:29.669134  185907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:09:29.682319  185907 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I0917 01:09:29.682371  185907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:09:29.722336  185907 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I0917 01:09:29.722415  185907 ssh_runner.go:195] Run: which lz4
	I0917 01:09:29.726565  185907 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 01:09:29.730766  185907 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 01:09:29.730799  185907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I0917 01:09:31.353694  185907 crio.go:444] Took 1.627184 seconds to copy over tarball
	I0917 01:09:31.353750  185907 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 01:09:31.390027  186335 out.go:252] * Restarting existing kvm2 VM for "kubernetes-upgrade-661366" ...
	I0917 01:09:31.390063  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) Calling .Start
	I0917 01:09:31.390291  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) starting domain...
	I0917 01:09:31.390318  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) ensuring networks are active...
	I0917 01:09:31.391451  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) Ensuring network default is active
	I0917 01:09:31.392015  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) Ensuring network mk-kubernetes-upgrade-661366 is active
	I0917 01:09:31.392577  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) getting domain XML...
	I0917 01:09:31.394203  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | starting domain XML:
	I0917 01:09:31.394225  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | <domain type='kvm'>
	I0917 01:09:31.394238  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <name>kubernetes-upgrade-661366</name>
	I0917 01:09:31.394253  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <uuid>9775ae9b-3a7a-4285-882d-c3410731e728</uuid>
	I0917 01:09:31.394263  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <memory unit='KiB'>3145728</memory>
	I0917 01:09:31.394275  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0917 01:09:31.394284  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <vcpu placement='static'>2</vcpu>
	I0917 01:09:31.394294  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <os>
	I0917 01:09:31.394305  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0917 01:09:31.394315  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <boot dev='cdrom'/>
	I0917 01:09:31.394324  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <boot dev='hd'/>
	I0917 01:09:31.394335  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <bootmenu enable='no'/>
	I0917 01:09:31.394366  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   </os>
	I0917 01:09:31.394411  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <features>
	I0917 01:09:31.394427  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <acpi/>
	I0917 01:09:31.394434  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <apic/>
	I0917 01:09:31.394446  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <pae/>
	I0917 01:09:31.394454  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   </features>
	I0917 01:09:31.394469  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0917 01:09:31.394494  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <clock offset='utc'/>
	I0917 01:09:31.394500  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <on_poweroff>destroy</on_poweroff>
	I0917 01:09:31.394508  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <on_reboot>restart</on_reboot>
	I0917 01:09:31.394513  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <on_crash>destroy</on_crash>
	I0917 01:09:31.394544  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <devices>
	I0917 01:09:31.394565  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0917 01:09:31.394574  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <disk type='file' device='cdrom'>
	I0917 01:09:31.394583  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <driver name='qemu' type='raw'/>
	I0917 01:09:31.394602  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <source file='/home/jenkins/minikube-integration/21550-141589/.minikube/machines/kubernetes-upgrade-661366/boot2docker.iso'/>
	I0917 01:09:31.394611  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <target dev='hdc' bus='scsi'/>
	I0917 01:09:31.394620  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <readonly/>
	I0917 01:09:31.394634  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0917 01:09:31.394645  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </disk>
	I0917 01:09:31.394656  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <disk type='file' device='disk'>
	I0917 01:09:31.394668  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0917 01:09:31.394684  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <source file='/home/jenkins/minikube-integration/21550-141589/.minikube/machines/kubernetes-upgrade-661366/kubernetes-upgrade-661366.rawdisk'/>
	I0917 01:09:31.394697  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <target dev='hda' bus='virtio'/>
	I0917 01:09:31.394710  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0917 01:09:31.394721  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </disk>
	I0917 01:09:31.394730  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0917 01:09:31.394747  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0917 01:09:31.394757  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </controller>
	I0917 01:09:31.394767  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0917 01:09:31.394779  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0917 01:09:31.394791  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0917 01:09:31.394811  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </controller>
	I0917 01:09:31.394823  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <interface type='network'>
	I0917 01:09:31.394835  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <mac address='52:54:00:53:b6:e4'/>
	I0917 01:09:31.394848  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <source network='mk-kubernetes-upgrade-661366'/>
	I0917 01:09:31.394872  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <model type='virtio'/>
	I0917 01:09:31.394889  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0917 01:09:31.394900  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </interface>
	I0917 01:09:31.394909  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <interface type='network'>
	I0917 01:09:31.394920  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <mac address='52:54:00:73:5e:5e'/>
	I0917 01:09:31.394932  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <source network='default'/>
	I0917 01:09:31.394942  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <model type='virtio'/>
	I0917 01:09:31.394956  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0917 01:09:31.394967  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </interface>
	I0917 01:09:31.394976  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <serial type='pty'>
	I0917 01:09:31.394988  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <target type='isa-serial' port='0'>
	I0917 01:09:31.394997  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |         <model name='isa-serial'/>
	I0917 01:09:31.395015  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       </target>
	I0917 01:09:31.395027  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </serial>
	I0917 01:09:31.395034  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <console type='pty'>
	I0917 01:09:31.395093  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <target type='serial' port='0'/>
	I0917 01:09:31.395121  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </console>
	I0917 01:09:31.395139  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <input type='mouse' bus='ps2'/>
	I0917 01:09:31.395158  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <input type='keyboard' bus='ps2'/>
	I0917 01:09:31.395170  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <audio id='1' type='none'/>
	I0917 01:09:31.395180  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <memballoon model='virtio'>
	I0917 01:09:31.395192  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0917 01:09:31.395203  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </memballoon>
	I0917 01:09:31.395213  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <rng model='virtio'>
	I0917 01:09:31.395222  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <backend model='random'>/dev/random</backend>
	I0917 01:09:31.395233  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0917 01:09:31.395241  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </rng>
	I0917 01:09:31.395257  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   </devices>
	I0917 01:09:31.395272  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | </domain>
	I0917 01:09:31.395284  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | 
	I0917 01:09:32.944733  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) waiting for domain to start...
	I0917 01:09:32.946437  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) domain is now running
	I0917 01:09:32.946477  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) waiting for IP...
	I0917 01:09:32.947714  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | domain kubernetes-upgrade-661366 has defined MAC address 52:54:00:53:b6:e4 in network mk-kubernetes-upgrade-661366
	I0917 01:09:32.948498  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) found domain IP: 192.168.50.189
	I0917 01:09:32.948538  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) reserving static IP address...
	I0917 01:09:32.948578  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | domain kubernetes-upgrade-661366 has current primary IP address 192.168.50.189 and MAC address 52:54:00:53:b6:e4 in network mk-kubernetes-upgrade-661366
	I0917 01:09:32.949081  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-661366", mac: "52:54:00:53:b6:e4", ip: "192.168.50.189"} in network mk-kubernetes-upgrade-661366: {Iface:virbr2 ExpiryTime:2025-09-17 02:09:01 +0000 UTC Type:0 Mac:52:54:00:53:b6:e4 Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:kubernetes-upgrade-661366 Clientid:01:52:54:00:53:b6:e4}
	I0917 01:09:32.949121  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) reserved static IP address 192.168.50.189 for domain kubernetes-upgrade-661366
	I0917 01:09:32.949142  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | skip adding static IP to network mk-kubernetes-upgrade-661366 - found existing host DHCP lease matching {name: "kubernetes-upgrade-661366", mac: "52:54:00:53:b6:e4", ip: "192.168.50.189"}
	I0917 01:09:32.949157  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | Getting to WaitForSSH function...
	I0917 01:09:32.949171  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) waiting for SSH...
	I0917 01:09:32.952313  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | domain kubernetes-upgrade-661366 has defined MAC address 52:54:00:53:b6:e4 in network mk-kubernetes-upgrade-661366
	I0917 01:09:32.952890  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:b6:e4", ip: ""} in network mk-kubernetes-upgrade-661366: {Iface:virbr2 ExpiryTime:2025-09-17 02:09:01 +0000 UTC Type:0 Mac:52:54:00:53:b6:e4 Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:kubernetes-upgrade-661366 Clientid:01:52:54:00:53:b6:e4}
	I0917 01:09:32.952922  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | domain kubernetes-upgrade-661366 has defined IP address 192.168.50.189 and MAC address 52:54:00:53:b6:e4 in network mk-kubernetes-upgrade-661366
	I0917 01:09:32.953177  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | Using SSH client type: external
	I0917 01:09:32.953214  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | Using SSH private key: /home/jenkins/minikube-integration/21550-141589/.minikube/machines/kubernetes-upgrade-661366/id_rsa (-rw-------)
	I0917 01:09:32.953258  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.189 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21550-141589/.minikube/machines/kubernetes-upgrade-661366/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 01:09:32.953278  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | About to run SSH command:
	I0917 01:09:32.953306  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | exit 0
	W0917 01:09:34.055264  183156 pod_ready.go:104] pod "etcd-pause-003341" is not "Ready", error: <nil>
	W0917 01:09:36.550883  183156 pod_ready.go:104] pod "etcd-pause-003341" is not "Ready", error: <nil>
	I0917 01:09:37.050222  183156 pod_ready.go:94] pod "etcd-pause-003341" is "Ready"
	I0917 01:09:37.050260  183156 pod_ready.go:86] duration metric: took 11.507079689s for pod "etcd-pause-003341" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.053670  183156 pod_ready.go:83] waiting for pod "kube-apiserver-pause-003341" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.058900  183156 pod_ready.go:94] pod "kube-apiserver-pause-003341" is "Ready"
	I0917 01:09:37.058930  183156 pod_ready.go:86] duration metric: took 5.229317ms for pod "kube-apiserver-pause-003341" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.062409  183156 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-003341" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.068791  183156 pod_ready.go:94] pod "kube-controller-manager-pause-003341" is "Ready"
	I0917 01:09:37.068824  183156 pod_ready.go:86] duration metric: took 6.384026ms for pod "kube-controller-manager-pause-003341" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.072569  183156 pod_ready.go:83] waiting for pod "kube-proxy-9xthx" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.247164  183156 pod_ready.go:94] pod "kube-proxy-9xthx" is "Ready"
	I0917 01:09:37.247201  183156 pod_ready.go:86] duration metric: took 174.603076ms for pod "kube-proxy-9xthx" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.447025  183156 pod_ready.go:83] waiting for pod "kube-scheduler-pause-003341" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.847451  183156 pod_ready.go:94] pod "kube-scheduler-pause-003341" is "Ready"
	I0917 01:09:37.847482  183156 pod_ready.go:86] duration metric: took 400.420287ms for pod "kube-scheduler-pause-003341" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.847494  183156 pod_ready.go:40] duration metric: took 12.321499231s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 01:09:37.900943  183156 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 01:09:37.905022  183156 out.go:179] * Done! kubectl is now configured to use "pause-003341" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.650654562Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758071378650632078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1db41d68-649e-4f47-b680-294bf66a2a90 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.651559110Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1180cd8a-a0e6-4a0d-9fff-52194566b248 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.651637993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1180cd8a-a0e6-4a0d-9fff-52194566b248 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.652059043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51db5cdd0370ce980a6bcc64bce3a15578324488f0f7aa3f2229b04bad55a942,PodSandboxId:bdf83652d8a1c8c16653e066bb82f130cfd92cd7dc83c0401ed5fac46f96a4c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758071364110984485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xthx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1459c5-1696-4e03-a638-921f1e6c547c,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63088e8180f123f7e5b2adbaaf66c2f5f9f2ecea0273096f52c46811a556f99,PodSandboxId:e055cde0b61b8fbef68da0c39cd1e454bceca7b21cefb3de2365ad3892ef6317,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758071360511315171,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b1508a99ce0ed02c62e794b4b7bc3d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257
,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a70955e4b7f57171a2ea9b731283569e648323092e7f15408d6fbca30d0385,PodSandboxId:60a80d420056fb10714ec60d9b65582ccd837d370130760a85cca15542b4cd2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758071360470907558,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 527d74da00d6a6d61913ea63691d068d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,
io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2208a1f1a4acb35cf3a0b4bc20c8fcce68d32033c91a0d3fce5bb5dfc70c1c9b,PodSandboxId:74c972177ae29bc7723333b045d32166a830cce8e9ade3e5025eec94fc71984c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758071360493257425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
255972a0efec9de7a8337349e9eb993,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db35cfba9ac2bd3a9b79076c9d536faff9fe56c07347c4203ee6d0811e556928,PodSandboxId:aac61577b13d6aa7fdb45bb1d59f836425006c73e771dce9c18298972c524a1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758071360478760761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernete
s.pod.name: kube-apiserver-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d1e7d0337d35447b7033e62317a447,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db129cfeb1cd59eb8fbb9db5803bfc99d8cd4002a319fb0931317d3ff6fc999,PodSandboxId:a2daacc4465fe0e66d882ae9c63fa060a7a612565c1d5ffa731330d297919c1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175807
1356476984304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-955n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79aefa4e-3e77-4863-bc04-390dc327197b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce8c60df3d44f099c4f0580568abfd3263339a7ed6902dd33ff338e4a1b2aaff,PodSandboxId:74c972177ae29bc7723333b045d32166a830cce8e9ade3e
5025eec94fc71984c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758071335307450231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0255972a0efec9de7a8337349e9eb993,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e1d4b09635725fe3cc3
618f2b4f504d842e8fa45c3beb86351205891a16273,PodSandboxId:60a80d420056fb10714ec60d9b65582ccd837d370130760a85cca15542b4cd2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758071335332100449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 527d74da00d6a6d61913ea63691d068d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:333555e32dc3f31b5f185685584e3326582004dd3c02757ed5391fcbd05013a5,PodSandboxId:e055cde0b61b8fbef68da0c39cd1e454bceca7b21cefb3de2365ad3892ef6317,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758071334986881610,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b1508a99ce0ed02c62e794b4b7bc3d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50dfd1f5829bf9fe520d625940dafacd9399aa820576b5ca8cd609c37b57203,PodSandboxId:bdf83652d8a1c8c16653e066bb82f130cfd92cd7dc83c0401ed5fac46f96a4c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758071334996787963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xthx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1459c5-1696-4e03-a638-921f1e6c547c,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa7dd95863c1e7ffdc2589551415495541f8d5d4e89f4ee106a8a1b1072693d,PodSandboxId:aac61577b13d6aa7fdb45bb1d59f836425006c73e771dce9c18298972c524a1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1758071334877583521,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d1e7d0337d35447b7033e62317a447,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c017f30b706451dfedd28d44162a42e54b53c998b4d03a79a2e78f229bc8c3,PodSandboxId:0afad675e99000cb048506aa1762e470d0286e6ad6c00520e44d7fda411ebdb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758071322826330209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-955n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79aefa4e-3e77-4863-bc04-390dc327197b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1180cd8a-a0e6-4a0d-9fff-52194566b248 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.710331452Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fa22109-4e0a-4ca5-bd9a-82b324d2c786 name=/runtime.v1.RuntimeService/Version
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.710448759Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fa22109-4e0a-4ca5-bd9a-82b324d2c786 name=/runtime.v1.RuntimeService/Version
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.713194110Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ddb98ce-8d92-4fcd-afa8-6ae4642f3667 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.714073475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758071378714041497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ddb98ce-8d92-4fcd-afa8-6ae4642f3667 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.715048180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=146d0f76-bb05-4abc-9123-8b048715847c name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.715184088Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=146d0f76-bb05-4abc-9123-8b048715847c name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.715618342Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51db5cdd0370ce980a6bcc64bce3a15578324488f0f7aa3f2229b04bad55a942,PodSandboxId:bdf83652d8a1c8c16653e066bb82f130cfd92cd7dc83c0401ed5fac46f96a4c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758071364110984485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xthx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1459c5-1696-4e03-a638-921f1e6c547c,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63088e8180f123f7e5b2adbaaf66c2f5f9f2ecea0273096f52c46811a556f99,PodSandboxId:e055cde0b61b8fbef68da0c39cd1e454bceca7b21cefb3de2365ad3892ef6317,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758071360511315171,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b1508a99ce0ed02c62e794b4b7bc3d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257
,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a70955e4b7f57171a2ea9b731283569e648323092e7f15408d6fbca30d0385,PodSandboxId:60a80d420056fb10714ec60d9b65582ccd837d370130760a85cca15542b4cd2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758071360470907558,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 527d74da00d6a6d61913ea63691d068d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,
io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2208a1f1a4acb35cf3a0b4bc20c8fcce68d32033c91a0d3fce5bb5dfc70c1c9b,PodSandboxId:74c972177ae29bc7723333b045d32166a830cce8e9ade3e5025eec94fc71984c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758071360493257425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
255972a0efec9de7a8337349e9eb993,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db35cfba9ac2bd3a9b79076c9d536faff9fe56c07347c4203ee6d0811e556928,PodSandboxId:aac61577b13d6aa7fdb45bb1d59f836425006c73e771dce9c18298972c524a1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758071360478760761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernete
s.pod.name: kube-apiserver-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d1e7d0337d35447b7033e62317a447,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db129cfeb1cd59eb8fbb9db5803bfc99d8cd4002a319fb0931317d3ff6fc999,PodSandboxId:a2daacc4465fe0e66d882ae9c63fa060a7a612565c1d5ffa731330d297919c1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175807
1356476984304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-955n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79aefa4e-3e77-4863-bc04-390dc327197b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce8c60df3d44f099c4f0580568abfd3263339a7ed6902dd33ff338e4a1b2aaff,PodSandboxId:74c972177ae29bc7723333b045d32166a830cce8e9ade3e
5025eec94fc71984c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758071335307450231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0255972a0efec9de7a8337349e9eb993,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e1d4b09635725fe3cc3
618f2b4f504d842e8fa45c3beb86351205891a16273,PodSandboxId:60a80d420056fb10714ec60d9b65582ccd837d370130760a85cca15542b4cd2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758071335332100449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 527d74da00d6a6d61913ea63691d068d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:333555e32dc3f31b5f185685584e3326582004dd3c02757ed5391fcbd05013a5,PodSandboxId:e055cde0b61b8fbef68da0c39cd1e454bceca7b21cefb3de2365ad3892ef6317,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758071334986881610,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b1508a99ce0ed02c62e794b4b7bc3d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50dfd1f5829bf9fe520d625940dafacd9399aa820576b5ca8cd609c37b57203,PodSandboxId:bdf83652d8a1c8c16653e066bb82f130cfd92cd7dc83c0401ed5fac46f96a4c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758071334996787963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xthx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1459c5-1696-4e03-a638-921f1e6c547c,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa7dd95863c1e7ffdc2589551415495541f8d5d4e89f4ee106a8a1b1072693d,PodSandboxId:aac61577b13d6aa7fdb45bb1d59f836425006c73e771dce9c18298972c524a1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1758071334877583521,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d1e7d0337d35447b7033e62317a447,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c017f30b706451dfedd28d44162a42e54b53c998b4d03a79a2e78f229bc8c3,PodSandboxId:0afad675e99000cb048506aa1762e470d0286e6ad6c00520e44d7fda411ebdb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758071322826330209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-955n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79aefa4e-3e77-4863-bc04-390dc327197b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=146d0f76-bb05-4abc-9123-8b048715847c name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.774593605Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9fac53a9-436d-4fcd-a8f7-1f423c3ace43 name=/runtime.v1.RuntimeService/Version
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.774703256Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9fac53a9-436d-4fcd-a8f7-1f423c3ace43 name=/runtime.v1.RuntimeService/Version
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.776413591Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f23fd737-b9da-4448-af21-804074a016b5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.777084223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758071378777050084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f23fd737-b9da-4448-af21-804074a016b5 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.777716475Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3757b9b1-9526-431c-a360-b267918ce2c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.777768723Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3757b9b1-9526-431c-a360-b267918ce2c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.778137692Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51db5cdd0370ce980a6bcc64bce3a15578324488f0f7aa3f2229b04bad55a942,PodSandboxId:bdf83652d8a1c8c16653e066bb82f130cfd92cd7dc83c0401ed5fac46f96a4c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758071364110984485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xthx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1459c5-1696-4e03-a638-921f1e6c547c,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63088e8180f123f7e5b2adbaaf66c2f5f9f2ecea0273096f52c46811a556f99,PodSandboxId:e055cde0b61b8fbef68da0c39cd1e454bceca7b21cefb3de2365ad3892ef6317,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758071360511315171,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b1508a99ce0ed02c62e794b4b7bc3d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257
,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a70955e4b7f57171a2ea9b731283569e648323092e7f15408d6fbca30d0385,PodSandboxId:60a80d420056fb10714ec60d9b65582ccd837d370130760a85cca15542b4cd2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758071360470907558,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 527d74da00d6a6d61913ea63691d068d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,
io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2208a1f1a4acb35cf3a0b4bc20c8fcce68d32033c91a0d3fce5bb5dfc70c1c9b,PodSandboxId:74c972177ae29bc7723333b045d32166a830cce8e9ade3e5025eec94fc71984c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758071360493257425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
255972a0efec9de7a8337349e9eb993,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db35cfba9ac2bd3a9b79076c9d536faff9fe56c07347c4203ee6d0811e556928,PodSandboxId:aac61577b13d6aa7fdb45bb1d59f836425006c73e771dce9c18298972c524a1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758071360478760761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernete
s.pod.name: kube-apiserver-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d1e7d0337d35447b7033e62317a447,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db129cfeb1cd59eb8fbb9db5803bfc99d8cd4002a319fb0931317d3ff6fc999,PodSandboxId:a2daacc4465fe0e66d882ae9c63fa060a7a612565c1d5ffa731330d297919c1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175807
1356476984304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-955n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79aefa4e-3e77-4863-bc04-390dc327197b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce8c60df3d44f099c4f0580568abfd3263339a7ed6902dd33ff338e4a1b2aaff,PodSandboxId:74c972177ae29bc7723333b045d32166a830cce8e9ade3e
5025eec94fc71984c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758071335307450231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0255972a0efec9de7a8337349e9eb993,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e1d4b09635725fe3cc3
618f2b4f504d842e8fa45c3beb86351205891a16273,PodSandboxId:60a80d420056fb10714ec60d9b65582ccd837d370130760a85cca15542b4cd2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758071335332100449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 527d74da00d6a6d61913ea63691d068d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:333555e32dc3f31b5f185685584e3326582004dd3c02757ed5391fcbd05013a5,PodSandboxId:e055cde0b61b8fbef68da0c39cd1e454bceca7b21cefb3de2365ad3892ef6317,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758071334986881610,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b1508a99ce0ed02c62e794b4b7bc3d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50dfd1f5829bf9fe520d625940dafacd9399aa820576b5ca8cd609c37b57203,PodSandboxId:bdf83652d8a1c8c16653e066bb82f130cfd92cd7dc83c0401ed5fac46f96a4c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758071334996787963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xthx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1459c5-1696-4e03-a638-921f1e6c547c,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa7dd95863c1e7ffdc2589551415495541f8d5d4e89f4ee106a8a1b1072693d,PodSandboxId:aac61577b13d6aa7fdb45bb1d59f836425006c73e771dce9c18298972c524a1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1758071334877583521,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d1e7d0337d35447b7033e62317a447,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c017f30b706451dfedd28d44162a42e54b53c998b4d03a79a2e78f229bc8c3,PodSandboxId:0afad675e99000cb048506aa1762e470d0286e6ad6c00520e44d7fda411ebdb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758071322826330209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-955n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79aefa4e-3e77-4863-bc04-390dc327197b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3757b9b1-9526-431c-a360-b267918ce2c2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.836892340Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce03ed90-a691-488f-b01f-599985b7219f name=/runtime.v1.RuntimeService/Version
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.836992761Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce03ed90-a691-488f-b01f-599985b7219f name=/runtime.v1.RuntimeService/Version
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.839509749Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75a8e2c8-5369-447e-a16c-ad53cc3fdabd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.840443587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758071378840411403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75a8e2c8-5369-447e-a16c-ad53cc3fdabd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.841305606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41d3609f-903a-49f6-891f-02b6933ca2be name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.841405031Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41d3609f-903a-49f6-891f-02b6933ca2be name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:38 pause-003341 crio[3387]: time="2025-09-17 01:09:38.841770421Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51db5cdd0370ce980a6bcc64bce3a15578324488f0f7aa3f2229b04bad55a942,PodSandboxId:bdf83652d8a1c8c16653e066bb82f130cfd92cd7dc83c0401ed5fac46f96a4c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758071364110984485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xthx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1459c5-1696-4e03-a638-921f1e6c547c,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63088e8180f123f7e5b2adbaaf66c2f5f9f2ecea0273096f52c46811a556f99,PodSandboxId:e055cde0b61b8fbef68da0c39cd1e454bceca7b21cefb3de2365ad3892ef6317,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758071360511315171,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b1508a99ce0ed02c62e794b4b7bc3d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257
,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a70955e4b7f57171a2ea9b731283569e648323092e7f15408d6fbca30d0385,PodSandboxId:60a80d420056fb10714ec60d9b65582ccd837d370130760a85cca15542b4cd2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758071360470907558,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 527d74da00d6a6d61913ea63691d068d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,
io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2208a1f1a4acb35cf3a0b4bc20c8fcce68d32033c91a0d3fce5bb5dfc70c1c9b,PodSandboxId:74c972177ae29bc7723333b045d32166a830cce8e9ade3e5025eec94fc71984c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758071360493257425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
255972a0efec9de7a8337349e9eb993,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db35cfba9ac2bd3a9b79076c9d536faff9fe56c07347c4203ee6d0811e556928,PodSandboxId:aac61577b13d6aa7fdb45bb1d59f836425006c73e771dce9c18298972c524a1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758071360478760761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernete
s.pod.name: kube-apiserver-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d1e7d0337d35447b7033e62317a447,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db129cfeb1cd59eb8fbb9db5803bfc99d8cd4002a319fb0931317d3ff6fc999,PodSandboxId:a2daacc4465fe0e66d882ae9c63fa060a7a612565c1d5ffa731330d297919c1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175807
1356476984304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-955n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79aefa4e-3e77-4863-bc04-390dc327197b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce8c60df3d44f099c4f0580568abfd3263339a7ed6902dd33ff338e4a1b2aaff,PodSandboxId:74c972177ae29bc7723333b045d32166a830cce8e9ade3e
5025eec94fc71984c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758071335307450231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0255972a0efec9de7a8337349e9eb993,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e1d4b09635725fe3cc3
618f2b4f504d842e8fa45c3beb86351205891a16273,PodSandboxId:60a80d420056fb10714ec60d9b65582ccd837d370130760a85cca15542b4cd2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758071335332100449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 527d74da00d6a6d61913ea63691d068d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:333555e32dc3f31b5f185685584e3326582004dd3c02757ed5391fcbd05013a5,PodSandboxId:e055cde0b61b8fbef68da0c39cd1e454bceca7b21cefb3de2365ad3892ef6317,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758071334986881610,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b1508a99ce0ed02c62e794b4b7bc3d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50dfd1f5829bf9fe520d625940dafacd9399aa820576b5ca8cd609c37b57203,PodSandboxId:bdf83652d8a1c8c16653e066bb82f130cfd92cd7dc83c0401ed5fac46f96a4c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758071334996787963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xthx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1459c5-1696-4e03-a638-921f1e6c547c,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa7dd95863c1e7ffdc2589551415495541f8d5d4e89f4ee106a8a1b1072693d,PodSandboxId:aac61577b13d6aa7fdb45bb1d59f836425006c73e771dce9c18298972c524a1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1758071334877583521,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d1e7d0337d35447b7033e62317a447,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c017f30b706451dfedd28d44162a42e54b53c998b4d03a79a2e78f229bc8c3,PodSandboxId:0afad675e99000cb048506aa1762e470d0286e6ad6c00520e44d7fda411ebdb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758071322826330209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-955n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79aefa4e-3e77-4863-bc04-390dc327197b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41d3609f-903a-49f6-891f-02b6933ca2be name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	51db5cdd0370c       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   14 seconds ago      Running             kube-proxy                3                   bdf83652d8a1c       kube-proxy-9xthx
	b63088e8180f1       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   18 seconds ago      Running             kube-controller-manager   3                   e055cde0b61b8       kube-controller-manager-pause-003341
	2208a1f1a4acb       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   18 seconds ago      Running             kube-scheduler            3                   74c972177ae29       kube-scheduler-pause-003341
	db35cfba9ac2b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   18 seconds ago      Running             kube-apiserver            3                   aac61577b13d6       kube-apiserver-pause-003341
	79a70955e4b7f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 seconds ago      Running             etcd                      3                   60a80d420056f       etcd-pause-003341
	5db129cfeb1cd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   22 seconds ago      Running             coredns                   2                   a2daacc4465fe       coredns-66bc5c9577-955n2
	12e1d4b096357       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   43 seconds ago      Exited              etcd                      2                   60a80d420056f       etcd-pause-003341
	ce8c60df3d44f       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   43 seconds ago      Exited              kube-scheduler            2                   74c972177ae29       kube-scheduler-pause-003341
	e50dfd1f5829b       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   43 seconds ago      Exited              kube-proxy                2                   bdf83652d8a1c       kube-proxy-9xthx
	333555e32dc3f       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   43 seconds ago      Exited              kube-controller-manager   2                   e055cde0b61b8       kube-controller-manager-pause-003341
	4aa7dd95863c1       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   44 seconds ago      Exited              kube-apiserver            2                   aac61577b13d6       kube-apiserver-pause-003341
	12c017f30b706       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   56 seconds ago      Exited              coredns                   1                   0afad675e9900       coredns-66bc5c9577-955n2
	
	
	==> coredns [12c017f30b706451dfedd28d44162a42e54b53c998b4d03a79a2e78f229bc8c3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:46975 - 23717 "HINFO IN 4183112172293320515.7005939120852097983. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.117006672s
	
	
	==> coredns [5db129cfeb1cd59eb8fbb9db5803bfc99d8cd4002a319fb0931317d3ff6fc999] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:37792->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:37766->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:37782->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54782 - 38568 "HINFO IN 8916208798569525581.7855192822637624078. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.082294771s
	
	
	==> describe nodes <==
	Name:               pause-003341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-003341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=pause-003341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T01_07_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 01:07:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-003341
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 01:09:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 01:09:23 +0000   Wed, 17 Sep 2025 01:07:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 01:09:23 +0000   Wed, 17 Sep 2025 01:07:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 01:09:23 +0000   Wed, 17 Sep 2025 01:07:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 01:09:23 +0000   Wed, 17 Sep 2025 01:07:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.157
	  Hostname:    pause-003341
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 3cce372e040d49ad910673a91e6bcbb4
	  System UUID:                3cce372e-040d-49ad-9106-73a91e6bcbb4
	  Boot ID:                    f1b2dda5-6d4d-45c6-80f9-b55d0e3d3477
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-955n2                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m11s
	  kube-system                 etcd-pause-003341                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m17s
	  kube-system                 kube-apiserver-pause-003341             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-controller-manager-pause-003341    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-proxy-9xthx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-scheduler-pause-003341             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14s                    kube-proxy       
	  Normal  Starting                 2m9s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  2m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m24s (x7 over 2m25s)  kubelet          Node pause-003341 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m24s (x8 over 2m25s)  kubelet          Node pause-003341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m24s (x8 over 2m25s)  kubelet          Node pause-003341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m17s                  kubelet          Node pause-003341 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m17s                  kubelet          Node pause-003341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s                  kubelet          Node pause-003341 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m17s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m16s                  kubelet          Node pause-003341 status is now: NodeReady
	  Normal  RegisteredNode           2m12s                  node-controller  Node pause-003341 event: Registered Node pause-003341 in Controller
	  Normal  Starting                 20s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19s (x8 over 20s)      kubelet          Node pause-003341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 20s)      kubelet          Node pause-003341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 20s)      kubelet          Node pause-003341 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12s                    node-controller  Node pause-003341 event: Registered Node pause-003341 in Controller
	
	
	==> dmesg <==
	[Sep17 01:06] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000059] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.006706] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.488246] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep17 01:07] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.117452] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.166116] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.165948] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.028750] kauditd_printk_skb: 18 callbacks suppressed
	[Sep17 01:08] kauditd_printk_skb: 258 callbacks suppressed
	[  +9.687866] kauditd_printk_skb: 275 callbacks suppressed
	[Sep17 01:09] kauditd_printk_skb: 245 callbacks suppressed
	[  +4.696309] kauditd_printk_skb: 99 callbacks suppressed
	
	
	==> etcd [12e1d4b09635725fe3cc3618f2b4f504d842e8fa45c3beb86351205891a16273] <==
	{"level":"info","ts":"2025-09-17T01:08:57.035452Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-17T01:08:57.093136Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.157:2379"}
	{"level":"info","ts":"2025-09-17T01:08:57.093622Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-17T01:08:57.096039Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-17T01:08:57.096581Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-003341","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.157:2380"],"advertise-client-urls":["https://192.168.83.157:2379"]}
	{"level":"warn","ts":"2025-09-17T01:08:57.096741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41646","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:41646: use of closed network connection"}
	2025/09/17 01:08:57 WARNING: [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	2025/09/17 01:08:57 WARNING: [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	{"level":"warn","ts":"2025-09-17T01:08:57.101303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41656","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:41656: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T01:08:57.105907Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T01:08:57.106026Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T01:08:57.106124Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T01:08:57.106150Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"734c93290a047874","current-leader-member-id":"734c93290a047874"}
	{"level":"info","ts":"2025-09-17T01:08:57.106268Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-17T01:08:57.106292Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-17T01:08:57.107956Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T01:08:57.115146Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T01:08:57.115263Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-17T01:08:57.115469Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.157:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T01:08:57.115525Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.157:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T01:08:57.115856Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.157:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T01:08:57.119128Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.83.157:2380"}
	{"level":"error","ts":"2025-09-17T01:08:57.119230Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.157:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T01:08:57.119286Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.83.157:2380"}
	{"level":"info","ts":"2025-09-17T01:08:57.119315Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-003341","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.157:2380"],"advertise-client-urls":["https://192.168.83.157:2379"]}
	
	
	==> etcd [79a70955e4b7f57171a2ea9b731283569e648323092e7f15408d6fbca30d0385] <==
	{"level":"warn","ts":"2025-09-17T01:09:22.240289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.264080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.275919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.284732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.301313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.323853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.354375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.391778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.404093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.426470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.449295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.467164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.486196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.523317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.560002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.567644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.583707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.600045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.617589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.639993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.664994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.700261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.794866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:35.070733Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.625699ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8679730973141494354 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.83.157\" mod_revision:463 > success:<request_put:<key:\"/registry/masterleases/192.168.83.157\" value_size:67 lease:8679730973141494352 >> failure:<request_range:<key:\"/registry/masterleases/192.168.83.157\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-17T01:09:35.071870Z","caller":"traceutil/trace.go:172","msg":"trace[1390127224] transaction","detail":"{read_only:false; response_revision:521; number_of_response:1; }","duration":"132.661102ms","start":"2025-09-17T01:09:34.938494Z","end":"2025-09-17T01:09:35.071155Z","steps":["trace[1390127224] 'compare'  (duration: 123.469165ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:09:39 up 2 min,  0 users,  load average: 1.44, 0.59, 0.22
	Linux pause-003341 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Sep  9 02:24:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4aa7dd95863c1e7ffdc2589551415495541f8d5d4e89f4ee106a8a1b1072693d] <==
	W0917 01:08:57.583918       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:08:57.584006       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0917 01:08:57.585502       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0917 01:08:57.619890       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0917 01:08:57.621032       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0917 01:08:57.621145       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 01:08:57.621365       1 instance.go:239] Using reconciler: lease
	W0917 01:08:57.622504       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:08:57.622999       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:08:58.585206       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:08:58.585206       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:08:58.623905       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:00.287892       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:00.355658       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:00.505330       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:02.372949       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:02.846260       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:03.033906       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:06.617984       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:07.171472       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:07.690247       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:13.233994       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:13.419292       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:14.626379       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0917 01:09:17.622611       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [db35cfba9ac2bd3a9b79076c9d536faff9fe56c07347c4203ee6d0811e556928] <==
	I0917 01:09:23.650027       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 01:09:23.659267       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I0917 01:09:23.661673       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 01:09:23.664027       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0917 01:09:23.664086       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0917 01:09:23.676866       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E0917 01:09:23.697065       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0917 01:09:23.708714       1 cache.go:39] Caches are synced for autoregister controller
	I0917 01:09:23.710892       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0917 01:09:23.721409       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 01:09:23.721452       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 01:09:23.721575       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 01:09:23.721684       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 01:09:23.721740       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 01:09:23.728103       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I0917 01:09:23.730424       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0917 01:09:23.856416       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0917 01:09:23.856465       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0917 01:09:24.434457       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0917 01:09:25.014175       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0917 01:09:25.052480       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0917 01:09:25.085134       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 01:09:25.093449       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 01:09:27.119479       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 01:09:27.299543       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [333555e32dc3f31b5f185685584e3326582004dd3c02757ed5391fcbd05013a5] <==
	I0917 01:08:57.316973       1 serving.go:386] Generated self-signed cert in-memory
	I0917 01:08:57.912934       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0917 01:08:57.912975       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 01:08:57.917970       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0917 01:08:57.918687       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 01:08:57.919158       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 01:08:57.919315       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [b63088e8180f123f7e5b2adbaaf66c2f5f9f2ecea0273096f52c46811a556f99] <==
	I0917 01:09:27.144227       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0917 01:09:27.144923       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0917 01:09:27.144986       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0917 01:09:27.145059       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0917 01:09:27.145093       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0917 01:09:27.145128       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0917 01:09:27.145317       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0917 01:09:27.146589       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0917 01:09:27.147134       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 01:09:27.147626       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0917 01:09:27.147716       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0917 01:09:27.149239       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0917 01:09:27.150488       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 01:09:27.150647       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 01:09:27.150835       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-003341"
	I0917 01:09:27.150927       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 01:09:27.153766       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0917 01:09:27.154731       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 01:09:27.154766       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 01:09:27.154774       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 01:09:27.157025       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0917 01:09:27.157781       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 01:09:27.160358       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0917 01:09:27.164186       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0917 01:09:27.171721       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	
	
	==> kube-proxy [51db5cdd0370ce980a6bcc64bce3a15578324488f0f7aa3f2229b04bad55a942] <==
	I0917 01:09:24.286094       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 01:09:24.386984       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 01:09:24.387042       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.157"]
	E0917 01:09:24.387136       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 01:09:24.429582       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0917 01:09:24.429650       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 01:09:24.429677       1 server_linux.go:132] "Using iptables Proxier"
	I0917 01:09:24.447920       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 01:09:24.448477       1 server.go:527] "Version info" version="v1.34.0"
	I0917 01:09:24.448646       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 01:09:24.456516       1 config.go:200] "Starting service config controller"
	I0917 01:09:24.460011       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 01:09:24.458910       1 config.go:106] "Starting endpoint slice config controller"
	I0917 01:09:24.460114       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 01:09:24.458934       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 01:09:24.460167       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 01:09:24.460904       1 config.go:309] "Starting node config controller"
	I0917 01:09:24.460933       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 01:09:24.460939       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 01:09:24.561126       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 01:09:24.561353       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 01:09:24.561368       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e50dfd1f5829bf9fe520d625940dafacd9399aa820576b5ca8cd609c37b57203] <==
	I0917 01:08:56.716399       1 server_linux.go:53] "Using iptables proxy"
	I0917 01:08:57.114038       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0917 01:09:07.115913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-003341&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [2208a1f1a4acb35cf3a0b4bc20c8fcce68d32033c91a0d3fce5bb5dfc70c1c9b] <==
	I0917 01:09:21.312048       1 serving.go:386] Generated self-signed cert in-memory
	W0917 01:09:23.524000       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 01:09:23.524086       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 01:09:23.524110       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 01:09:23.524126       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 01:09:23.652739       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 01:09:23.656730       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 01:09:23.660637       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 01:09:23.660691       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 01:09:23.662996       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 01:09:23.660717       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 01:09:23.763418       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ce8c60df3d44f099c4f0580568abfd3263339a7ed6902dd33ff338e4a1b2aaff] <==
	I0917 01:08:57.662078       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Sep 17 01:09:22 pause-003341 kubelet[4588]: E0917 01:09:22.229030    4588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-003341\" not found" node="pause-003341"
	Sep 17 01:09:22 pause-003341 kubelet[4588]: E0917 01:09:22.230367    4588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-003341\" not found" node="pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: E0917 01:09:23.236531    4588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-003341\" not found" node="pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: E0917 01:09:23.236989    4588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-003341\" not found" node="pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: E0917 01:09:23.237311    4588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-003341\" not found" node="pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: E0917 01:09:23.239139    4588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-003341\" not found" node="pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.494650    4588 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.709768    4588 kubelet_node_status.go:124] "Node was previously registered" node="pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.709919    4588 kubelet_node_status.go:78] "Successfully registered node" node="pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.709959    4588 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.711482    4588 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: E0917 01:09:23.751564    4588 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-003341\" already exists" pod="kube-system/etcd-pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.751631    4588 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: E0917 01:09:23.762491    4588 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-003341\" already exists" pod="kube-system/kube-apiserver-pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.762517    4588 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: E0917 01:09:23.773727    4588 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-003341\" already exists" pod="kube-system/kube-controller-manager-pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.773754    4588 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: E0917 01:09:23.784114    4588 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-003341\" already exists" pod="kube-system/kube-scheduler-pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.786181    4588 apiserver.go:52] "Watching apiserver"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.797896    4588 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.836030    4588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e1459c5-1696-4e03-a638-921f1e6c547c-xtables-lock\") pod \"kube-proxy-9xthx\" (UID: \"9e1459c5-1696-4e03-a638-921f1e6c547c\") " pod="kube-system/kube-proxy-9xthx"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.836178    4588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e1459c5-1696-4e03-a638-921f1e6c547c-lib-modules\") pod \"kube-proxy-9xthx\" (UID: \"9e1459c5-1696-4e03-a638-921f1e6c547c\") " pod="kube-system/kube-proxy-9xthx"
	Sep 17 01:09:24 pause-003341 kubelet[4588]: I0917 01:09:24.092091    4588 scope.go:117] "RemoveContainer" containerID="e50dfd1f5829bf9fe520d625940dafacd9399aa820576b5ca8cd609c37b57203"
	Sep 17 01:09:29 pause-003341 kubelet[4588]: E0917 01:09:29.946898    4588 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758071369946217054  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 17 01:09:29 pause-003341 kubelet[4588]: E0917 01:09:29.946953    4588 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758071369946217054  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-003341 -n pause-003341
helpers_test.go:269: (dbg) Run:  kubectl --context pause-003341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-003341 -n pause-003341
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-003341 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-003341 logs -n 25: (1.878405669s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-733841 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                              │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo cat /etc/kubernetes/kubelet.conf                                                                                                             │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo cat /var/lib/kubelet/config.yaml                                                                                                             │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo systemctl status docker --all --full --no-pager                                                                                              │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo systemctl cat docker --no-pager                                                                                                              │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo cat /etc/docker/daemon.json                                                                                                                  │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo docker system info                                                                                                                           │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo systemctl status cri-docker --all --full --no-pager                                                                                          │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo systemctl cat cri-docker --no-pager                                                                                                          │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                     │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                               │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo cri-dockerd --version                                                                                                                        │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo systemctl status containerd --all --full --no-pager                                                                                          │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo systemctl cat containerd --no-pager                                                                                                          │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo cat /lib/systemd/system/containerd.service                                                                                                   │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo cat /etc/containerd/config.toml                                                                                                              │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo containerd config dump                                                                                                                       │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo systemctl status crio --all --full --no-pager                                                                                                │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo systemctl cat crio --no-pager                                                                                                                │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                      │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ ssh     │ -p cilium-733841 sudo crio config                                                                                                                                  │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │                     │
	│ delete  │ -p cilium-733841                                                                                                                                                   │ cilium-733841             │ jenkins │ v1.37.0 │ 17 Sep 25 01:08 UTC │ 17 Sep 25 01:08 UTC │
	│ start   │ -p stopped-upgrade-369624 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ stopped-upgrade-369624    │ jenkins │ v1.32.0 │ 17 Sep 25 01:08 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-661366                                                                                                                                       │ kubernetes-upgrade-661366 │ jenkins │ v1.37.0 │ 17 Sep 25 01:09 UTC │ 17 Sep 25 01:09 UTC │
	│ start   │ -p kubernetes-upgrade-661366 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-661366 │ jenkins │ v1.37.0 │ 17 Sep 25 01:09 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 01:09:31
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 01:09:31.232102  186335 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:09:31.232390  186335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:09:31.232402  186335 out.go:374] Setting ErrFile to fd 2...
	I0917 01:09:31.232410  186335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:09:31.232644  186335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
	I0917 01:09:31.233309  186335 out.go:368] Setting JSON to false
	I0917 01:09:31.234504  186335 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":13915,"bootTime":1758057456,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 01:09:31.234622  186335 start.go:140] virtualization: kvm guest
	I0917 01:09:31.236824  186335 out.go:179] * [kubernetes-upgrade-661366] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 01:09:31.238867  186335 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 01:09:31.238872  186335 notify.go:220] Checking for updates...
	I0917 01:09:31.240723  186335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:09:31.242239  186335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	I0917 01:09:31.243657  186335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	I0917 01:09:31.245129  186335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 01:09:31.246475  186335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:09:31.248623  186335 config.go:182] Loaded profile config "kubernetes-upgrade-661366": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0917 01:09:31.249302  186335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:09:31.249403  186335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:09:31.268918  186335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41789
	I0917 01:09:31.269629  186335 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:09:31.270300  186335 main.go:141] libmachine: Using API Version  1
	I0917 01:09:31.270340  186335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:09:31.271120  186335 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:09:31.271347  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) Calling .DriverName
	I0917 01:09:31.271765  186335 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 01:09:31.272396  186335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:09:31.272458  186335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:09:31.294507  186335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40861
	I0917 01:09:31.295021  186335 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:09:31.295700  186335 main.go:141] libmachine: Using API Version  1
	I0917 01:09:31.295750  186335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:09:31.296314  186335 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:09:31.296603  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) Calling .DriverName
	I0917 01:09:31.346483  186335 out.go:179] * Using the kvm2 driver based on existing profile
	I0917 01:09:31.347705  186335 start.go:304] selected driver: kvm2
	I0917 01:09:31.347728  186335 start.go:918] validating driver "kvm2" against &{Name:kubernetes-upgrade-661366 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-up
grade-661366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.189 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:
false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:09:31.347885  186335 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:09:31.348790  186335 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:09:31.348910  186335 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21550-141589/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0917 01:09:31.364832  186335 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0917 01:09:31.365319  186335 cni.go:84] Creating CNI manager for ""
	I0917 01:09:31.365383  186335 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 01:09:31.365419  186335 start.go:348] cluster config:
	{Name:kubernetes-upgrade-661366 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:kubernetes-upgrade-661366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.189 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 01:09:31.365528  186335 iso.go:125] acquiring lock: {Name:mkbc497934aeda3bf1eaa3e96176da91d2f10b30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 01:09:31.367478  186335 out.go:179] * Starting "kubernetes-upgrade-661366" primary control-plane node in "kubernetes-upgrade-661366" cluster
	I0917 01:09:31.368619  186335 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 01:09:31.368666  186335 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0917 01:09:31.368678  186335 cache.go:58] Caching tarball of preloaded images
	I0917 01:09:31.368792  186335 preload.go:172] Found /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0917 01:09:31.368809  186335 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 01:09:31.368937  186335 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/kubernetes-upgrade-661366/config.json ...
	I0917 01:09:31.369144  186335 start.go:360] acquireMachinesLock for kubernetes-upgrade-661366: {Name:mk4898504d31cc722a10b1787754ef8ecd27d0ff Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0917 01:09:31.369213  186335 start.go:364] duration metric: took 30.56µs to acquireMachinesLock for "kubernetes-upgrade-661366"
	I0917 01:09:31.369233  186335 start.go:96] Skipping create...Using existing machine configuration
	I0917 01:09:31.369240  186335 fix.go:54] fixHost starting: 
	I0917 01:09:31.369641  186335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 01:09:31.369694  186335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 01:09:31.383680  186335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I0917 01:09:31.384203  186335 main.go:141] libmachine: () Calling .GetVersion
	I0917 01:09:31.384700  186335 main.go:141] libmachine: Using API Version  1
	I0917 01:09:31.384724  186335 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 01:09:31.385166  186335 main.go:141] libmachine: () Calling .GetMachineName
	I0917 01:09:31.385444  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) Calling .DriverName
	I0917 01:09:31.385616  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) Calling .GetState
	I0917 01:09:31.387649  186335 fix.go:112] recreateIfNeeded on kubernetes-upgrade-661366: state=Stopped err=<nil>
	I0917 01:09:31.387689  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) Calling .DriverName
	W0917 01:09:31.387915  186335 fix.go:138] unexpected machine state, will restart: <nil>
	W0917 01:09:29.551984  183156 pod_ready.go:104] pod "etcd-pause-003341" is not "Ready", error: <nil>
	W0917 01:09:32.053393  183156 pod_ready.go:104] pod "etcd-pause-003341" is not "Ready", error: <nil>
	I0917 01:09:29.660141  185907 main.go:141] libmachine: (stopped-upgrade-369624) Calling .GetIP
	I0917 01:09:29.663629  185907 main.go:141] libmachine: (stopped-upgrade-369624) DBG | domain stopped-upgrade-369624 has defined MAC address 52:54:00:5d:06:34 in network mk-stopped-upgrade-369624
	I0917 01:09:29.663988  185907 main.go:141] libmachine: (stopped-upgrade-369624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:06:34", ip: ""} in network mk-stopped-upgrade-369624: {Iface:virbr4 ExpiryTime:2025-09-17 02:09:22 +0000 UTC Type:0 Mac:52:54:00:5d:06:34 Iaid: IPaddr:192.168.61.95 Prefix:24 Hostname:stopped-upgrade-369624 Clientid:01:52:54:00:5d:06:34}
	I0917 01:09:29.664015  185907 main.go:141] libmachine: (stopped-upgrade-369624) DBG | domain stopped-upgrade-369624 has defined IP address 192.168.61.95 and MAC address 52:54:00:5d:06:34 in network mk-stopped-upgrade-369624
	I0917 01:09:29.664404  185907 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0917 01:09:29.669134  185907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:09:29.682319  185907 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I0917 01:09:29.682371  185907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:09:29.722336  185907 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.3". assuming images are not preloaded.
	I0917 01:09:29.722415  185907 ssh_runner.go:195] Run: which lz4
	I0917 01:09:29.726565  185907 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0917 01:09:29.730766  185907 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0917 01:09:29.730799  185907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457879245 bytes)
	I0917 01:09:31.353694  185907 crio.go:444] Took 1.627184 seconds to copy over tarball
	I0917 01:09:31.353750  185907 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0917 01:09:31.390027  186335 out.go:252] * Restarting existing kvm2 VM for "kubernetes-upgrade-661366" ...
	I0917 01:09:31.390063  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) Calling .Start
	I0917 01:09:31.390291  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) starting domain...
	I0917 01:09:31.390318  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) ensuring networks are active...
	I0917 01:09:31.391451  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) Ensuring network default is active
	I0917 01:09:31.392015  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) Ensuring network mk-kubernetes-upgrade-661366 is active
	I0917 01:09:31.392577  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) getting domain XML...
	I0917 01:09:31.394203  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | starting domain XML:
	I0917 01:09:31.394225  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | <domain type='kvm'>
	I0917 01:09:31.394238  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <name>kubernetes-upgrade-661366</name>
	I0917 01:09:31.394253  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <uuid>9775ae9b-3a7a-4285-882d-c3410731e728</uuid>
	I0917 01:09:31.394263  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <memory unit='KiB'>3145728</memory>
	I0917 01:09:31.394275  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0917 01:09:31.394284  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <vcpu placement='static'>2</vcpu>
	I0917 01:09:31.394294  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <os>
	I0917 01:09:31.394305  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0917 01:09:31.394315  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <boot dev='cdrom'/>
	I0917 01:09:31.394324  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <boot dev='hd'/>
	I0917 01:09:31.394335  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <bootmenu enable='no'/>
	I0917 01:09:31.394366  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   </os>
	I0917 01:09:31.394411  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <features>
	I0917 01:09:31.394427  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <acpi/>
	I0917 01:09:31.394434  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <apic/>
	I0917 01:09:31.394446  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <pae/>
	I0917 01:09:31.394454  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   </features>
	I0917 01:09:31.394469  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0917 01:09:31.394494  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <clock offset='utc'/>
	I0917 01:09:31.394500  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <on_poweroff>destroy</on_poweroff>
	I0917 01:09:31.394508  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <on_reboot>restart</on_reboot>
	I0917 01:09:31.394513  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <on_crash>destroy</on_crash>
	I0917 01:09:31.394544  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   <devices>
	I0917 01:09:31.394565  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0917 01:09:31.394574  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <disk type='file' device='cdrom'>
	I0917 01:09:31.394583  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <driver name='qemu' type='raw'/>
	I0917 01:09:31.394602  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <source file='/home/jenkins/minikube-integration/21550-141589/.minikube/machines/kubernetes-upgrade-661366/boot2docker.iso'/>
	I0917 01:09:31.394611  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <target dev='hdc' bus='scsi'/>
	I0917 01:09:31.394620  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <readonly/>
	I0917 01:09:31.394634  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0917 01:09:31.394645  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </disk>
	I0917 01:09:31.394656  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <disk type='file' device='disk'>
	I0917 01:09:31.394668  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0917 01:09:31.394684  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <source file='/home/jenkins/minikube-integration/21550-141589/.minikube/machines/kubernetes-upgrade-661366/kubernetes-upgrade-661366.rawdisk'/>
	I0917 01:09:31.394697  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <target dev='hda' bus='virtio'/>
	I0917 01:09:31.394710  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0917 01:09:31.394721  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </disk>
	I0917 01:09:31.394730  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0917 01:09:31.394747  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0917 01:09:31.394757  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </controller>
	I0917 01:09:31.394767  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0917 01:09:31.394779  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0917 01:09:31.394791  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0917 01:09:31.394811  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </controller>
	I0917 01:09:31.394823  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <interface type='network'>
	I0917 01:09:31.394835  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <mac address='52:54:00:53:b6:e4'/>
	I0917 01:09:31.394848  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <source network='mk-kubernetes-upgrade-661366'/>
	I0917 01:09:31.394872  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <model type='virtio'/>
	I0917 01:09:31.394889  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0917 01:09:31.394900  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </interface>
	I0917 01:09:31.394909  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <interface type='network'>
	I0917 01:09:31.394920  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <mac address='52:54:00:73:5e:5e'/>
	I0917 01:09:31.394932  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <source network='default'/>
	I0917 01:09:31.394942  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <model type='virtio'/>
	I0917 01:09:31.394956  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0917 01:09:31.394967  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </interface>
	I0917 01:09:31.394976  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <serial type='pty'>
	I0917 01:09:31.394988  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <target type='isa-serial' port='0'>
	I0917 01:09:31.394997  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |         <model name='isa-serial'/>
	I0917 01:09:31.395015  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       </target>
	I0917 01:09:31.395027  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </serial>
	I0917 01:09:31.395034  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <console type='pty'>
	I0917 01:09:31.395093  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <target type='serial' port='0'/>
	I0917 01:09:31.395121  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </console>
	I0917 01:09:31.395139  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <input type='mouse' bus='ps2'/>
	I0917 01:09:31.395158  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <input type='keyboard' bus='ps2'/>
	I0917 01:09:31.395170  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <audio id='1' type='none'/>
	I0917 01:09:31.395180  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <memballoon model='virtio'>
	I0917 01:09:31.395192  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0917 01:09:31.395203  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </memballoon>
	I0917 01:09:31.395213  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     <rng model='virtio'>
	I0917 01:09:31.395222  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <backend model='random'>/dev/random</backend>
	I0917 01:09:31.395233  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0917 01:09:31.395241  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |     </rng>
	I0917 01:09:31.395257  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG |   </devices>
	I0917 01:09:31.395272  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | </domain>
	I0917 01:09:31.395284  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | 
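The block above is the raw libvirt domain XML that libmachine re-submits when restarting the kubernetes-upgrade-661366 VM. For readers inspecting such a definition outside the test harness, here is a minimal Go sketch (illustrative only, not minikube code; the domain.xml file name and the virsh dumpxml step are assumptions) that extracts the NIC MAC addresses and their networks with encoding/xml:

package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

// Only the handful of fields needed to recover the NIC wiring; the real schema is far larger.
type domain struct {
	Name       string     `xml:"name"`
	Interfaces []netIface `xml:"devices>interface"`
}

type netIface struct {
	MAC struct {
		Address string `xml:"address,attr"`
	} `xml:"mac"`
	Source struct {
		Network string `xml:"network,attr"`
	} `xml:"source"`
}

func main() {
	// e.g. virsh dumpxml kubernetes-upgrade-661366 > domain.xml
	raw, err := os.ReadFile("domain.xml")
	if err != nil {
		panic(err)
	}
	var d domain
	if err := xml.Unmarshal(raw, &d); err != nil {
		panic(err)
	}
	for _, ifc := range d.Interfaces {
		fmt.Printf("%s: %s on network %q\n", d.Name, ifc.MAC.Address, ifc.Source.Network)
	}
}

Run against the XML above, this would report 52:54:00:53:b6:e4 on "mk-kubernetes-upgrade-661366" and 52:54:00:73:5e:5e on "default", the same interfaces the driver later matches against DHCP leases while waiting for an IP.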
	I0917 01:09:32.944733  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) waiting for domain to start...
	I0917 01:09:32.946437  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) domain is now running
	I0917 01:09:32.946477  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) waiting for IP...
	I0917 01:09:32.947714  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | domain kubernetes-upgrade-661366 has defined MAC address 52:54:00:53:b6:e4 in network mk-kubernetes-upgrade-661366
	I0917 01:09:32.948498  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) found domain IP: 192.168.50.189
	I0917 01:09:32.948538  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) reserving static IP address...
	I0917 01:09:32.948578  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | domain kubernetes-upgrade-661366 has current primary IP address 192.168.50.189 and MAC address 52:54:00:53:b6:e4 in network mk-kubernetes-upgrade-661366
	I0917 01:09:32.949081  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-661366", mac: "52:54:00:53:b6:e4", ip: "192.168.50.189"} in network mk-kubernetes-upgrade-661366: {Iface:virbr2 ExpiryTime:2025-09-17 02:09:01 +0000 UTC Type:0 Mac:52:54:00:53:b6:e4 Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:kubernetes-upgrade-661366 Clientid:01:52:54:00:53:b6:e4}
	I0917 01:09:32.949121  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) reserved static IP address 192.168.50.189 for domain kubernetes-upgrade-661366
	I0917 01:09:32.949142  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | skip adding static IP to network mk-kubernetes-upgrade-661366 - found existing host DHCP lease matching {name: "kubernetes-upgrade-661366", mac: "52:54:00:53:b6:e4", ip: "192.168.50.189"}
	I0917 01:09:32.949157  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | Getting to WaitForSSH function...
	I0917 01:09:32.949171  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) waiting for SSH...
	I0917 01:09:32.952313  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | domain kubernetes-upgrade-661366 has defined MAC address 52:54:00:53:b6:e4 in network mk-kubernetes-upgrade-661366
	I0917 01:09:32.952890  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:b6:e4", ip: ""} in network mk-kubernetes-upgrade-661366: {Iface:virbr2 ExpiryTime:2025-09-17 02:09:01 +0000 UTC Type:0 Mac:52:54:00:53:b6:e4 Iaid: IPaddr:192.168.50.189 Prefix:24 Hostname:kubernetes-upgrade-661366 Clientid:01:52:54:00:53:b6:e4}
	I0917 01:09:32.952922  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | domain kubernetes-upgrade-661366 has defined IP address 192.168.50.189 and MAC address 52:54:00:53:b6:e4 in network mk-kubernetes-upgrade-661366
	I0917 01:09:32.953177  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | Using SSH client type: external
	I0917 01:09:32.953214  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | Using SSH private key: /home/jenkins/minikube-integration/21550-141589/.minikube/machines/kubernetes-upgrade-661366/id_rsa (-rw-------)
	I0917 01:09:32.953258  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.189 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21550-141589/.minikube/machines/kubernetes-upgrade-661366/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0917 01:09:32.953278  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | About to run SSH command:
	I0917 01:09:32.953306  186335 main.go:141] libmachine: (kubernetes-upgrade-661366) DBG | exit 0
	W0917 01:09:34.055264  183156 pod_ready.go:104] pod "etcd-pause-003341" is not "Ready", error: <nil>
	W0917 01:09:36.550883  183156 pod_ready.go:104] pod "etcd-pause-003341" is not "Ready", error: <nil>
	I0917 01:09:37.050222  183156 pod_ready.go:94] pod "etcd-pause-003341" is "Ready"
	I0917 01:09:37.050260  183156 pod_ready.go:86] duration metric: took 11.507079689s for pod "etcd-pause-003341" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.053670  183156 pod_ready.go:83] waiting for pod "kube-apiserver-pause-003341" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.058900  183156 pod_ready.go:94] pod "kube-apiserver-pause-003341" is "Ready"
	I0917 01:09:37.058930  183156 pod_ready.go:86] duration metric: took 5.229317ms for pod "kube-apiserver-pause-003341" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.062409  183156 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-003341" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.068791  183156 pod_ready.go:94] pod "kube-controller-manager-pause-003341" is "Ready"
	I0917 01:09:37.068824  183156 pod_ready.go:86] duration metric: took 6.384026ms for pod "kube-controller-manager-pause-003341" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.072569  183156 pod_ready.go:83] waiting for pod "kube-proxy-9xthx" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.247164  183156 pod_ready.go:94] pod "kube-proxy-9xthx" is "Ready"
	I0917 01:09:37.247201  183156 pod_ready.go:86] duration metric: took 174.603076ms for pod "kube-proxy-9xthx" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.447025  183156 pod_ready.go:83] waiting for pod "kube-scheduler-pause-003341" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.847451  183156 pod_ready.go:94] pod "kube-scheduler-pause-003341" is "Ready"
	I0917 01:09:37.847482  183156 pod_ready.go:86] duration metric: took 400.420287ms for pod "kube-scheduler-pause-003341" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 01:09:37.847494  183156 pod_ready.go:40] duration metric: took 12.321499231s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 01:09:37.900943  183156 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0917 01:09:37.905022  183156 out.go:179] * Done! kubectl is now configured to use "pause-003341" cluster and "default" namespace by default
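At this point pod_ready.go has polled each control-plane pod in kube-system until its Ready condition turned true or the pod disappeared (etcd-pause-003341 took the longest, ~11.5s). A standalone client-go sketch of that style of check, shown only for orientation (the kubeconfig path and the helper name are placeholders; minikube's own implementation differs):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReadyOrGone polls until the pod reports Ready, disappears, or the timeout expires.
func waitPodReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // "Ready or be gone": a deleted pod also ends the wait
		}
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReadyOrGone(context.Background(), cs, "kube-system", "etcd-pause-003341", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("etcd-pause-003341 is Ready (or gone)")
}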
	I0917 01:09:34.521776  185907 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.167992965s)
	I0917 01:09:34.521801  185907 crio.go:451] Took 3.168088 seconds to extract the tarball
	I0917 01:09:34.521824  185907 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0917 01:09:34.567654  185907 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 01:09:34.647752  185907 crio.go:496] all images are preloaded for cri-o runtime.
	I0917 01:09:34.647772  185907 cache_images.go:84] Images are preloaded, skipping loading
	I0917 01:09:34.647837  185907 ssh_runner.go:195] Run: crio config
	I0917 01:09:34.713713  185907 cni.go:84] Creating CNI manager for ""
	I0917 01:09:34.713724  185907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0917 01:09:34.713742  185907 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0917 01:09:34.713760  185907 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.95 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-369624 NodeName:stopped-upgrade-369624 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 01:09:34.713948  185907 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.95
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-369624"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 01:09:34.714067  185907 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=stopped-upgrade-369624 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-369624 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
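The kubeadm config rendered above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); a few lines below it is copied to /var/tmp/minikube/kubeadm.yaml.new and later promoted to /var/tmp/minikube/kubeadm.yaml. A minimal sketch for sanity-checking which documents such a file contains, assuming gopkg.in/yaml.v3 (illustrative tooling, not something the test runs):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Decode only the type metadata of each document in the "---"-separated stream.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents
			}
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}

For the config above this would print kubeadm.k8s.io/v1beta3 InitConfiguration and ClusterConfiguration, kubelet.config.k8s.io/v1beta1 KubeletConfiguration, and kubeproxy.config.k8s.io/v1alpha1 KubeProxyConfiguration.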
	I0917 01:09:34.714140  185907 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I0917 01:09:34.727471  185907 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 01:09:34.727555  185907 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 01:09:34.739372  185907 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0917 01:09:34.757994  185907 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 01:09:34.776937  185907 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0917 01:09:34.796364  185907 ssh_runner.go:195] Run: grep 192.168.61.95	control-plane.minikube.internal$ /etc/hosts
	I0917 01:09:34.800776  185907 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.95	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 01:09:34.814260  185907 certs.go:56] Setting up /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624 for IP: 192.168.61.95
	I0917 01:09:34.814287  185907 certs.go:190] acquiring lock for shared ca certs: {Name:mk9185d5103eebb4e8c41dd45f840888861a3f37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:09:34.814473  185907 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/21550-141589/.minikube/ca.key
	I0917 01:09:34.814511  185907 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/21550-141589/.minikube/proxy-client-ca.key
	I0917 01:09:34.814555  185907 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/client.key
	I0917 01:09:34.814564  185907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/client.crt with IP's: []
	I0917 01:09:35.135412  185907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/client.crt ...
	I0917 01:09:35.135433  185907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/client.crt: {Name:mka97820dc74d20c2c74ebc61190f57a826434ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:09:35.135631  185907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/client.key ...
	I0917 01:09:35.135644  185907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/client.key: {Name:mka06c23ff52b9fa48fbbd2391d84f252e8e0140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:09:35.135727  185907 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/apiserver.key.3bdba0d9
	I0917 01:09:35.135737  185907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/apiserver.crt.3bdba0d9 with IP's: [192.168.61.95 10.96.0.1 127.0.0.1 10.0.0.1]
	I0917 01:09:35.412348  185907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/apiserver.crt.3bdba0d9 ...
	I0917 01:09:35.412374  185907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/apiserver.crt.3bdba0d9: {Name:mk56342b1a6189599d0e4373d264ca671c3875f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:09:35.412591  185907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/apiserver.key.3bdba0d9 ...
	I0917 01:09:35.412606  185907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/apiserver.key.3bdba0d9: {Name:mke503ac16f6fc64bf337b6f8d75998cd22c0348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:09:35.412714  185907 certs.go:337] copying /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/apiserver.crt.3bdba0d9 -> /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/apiserver.crt
	I0917 01:09:35.412814  185907 certs.go:341] copying /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/apiserver.key.3bdba0d9 -> /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/apiserver.key
	I0917 01:09:35.412881  185907 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/proxy-client.key
	I0917 01:09:35.412893  185907 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/proxy-client.crt with IP's: []
	I0917 01:09:35.504731  185907 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/proxy-client.crt ...
	I0917 01:09:35.504749  185907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/proxy-client.crt: {Name:mkd3b13ca3347657e8691fe02c5ed97448eaedf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 01:09:35.504985  185907 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/proxy-client.key ...
	I0917 01:09:35.505000  185907 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/proxy-client.key: {Name:mkde582c94cff3692ccd897c3d838e45f84d81db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
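certs.go above generates three leaf pairs for the stopped-upgrade-369624 profile — the minikube-user client cert, the apiserver serving cert with SAN IPs [192.168.61.95 10.96.0.1 127.0.0.1 10.0.0.1], and the aggregator proxy-client cert — each signed by the cached CA keys whose generation was skipped earlier. As a rough, self-contained illustration of that CA-signed-certificate step using the standard crypto/x509 package (subject names, lifetimes and key sizes here are assumptions; minikube's crypto.go differs in detail):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A self-signed CA, standing in for the cached minikubeCA key pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}

	// A serving certificate signed by that CA, carrying the SAN IPs seen in the log.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.61.95"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}

	// PEM-encode both certificates to stdout, CA first.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: caDER})
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}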
	I0917 01:09:35.505208  185907 certs.go:437] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/home/jenkins/minikube-integration/21550-141589/.minikube/certs/145530.pem (1338 bytes)
	W0917 01:09:35.505242  185907 certs.go:433] ignoring /home/jenkins/minikube-integration/21550-141589/.minikube/certs/home/jenkins/minikube-integration/21550-141589/.minikube/certs/145530_empty.pem, impossibly tiny 0 bytes
	I0917 01:09:35.505251  185907 certs.go:437] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 01:09:35.505271  185907 certs.go:437] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/home/jenkins/minikube-integration/21550-141589/.minikube/certs/ca.pem (1078 bytes)
	I0917 01:09:35.505299  185907 certs.go:437] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/home/jenkins/minikube-integration/21550-141589/.minikube/certs/cert.pem (1123 bytes)
	I0917 01:09:35.505318  185907 certs.go:437] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/certs/home/jenkins/minikube-integration/21550-141589/.minikube/certs/key.pem (1675 bytes)
	I0917 01:09:35.505355  185907 certs.go:437] found cert: /home/jenkins/minikube-integration/21550-141589/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/21550-141589/.minikube/files/etc/ssl/certs/1455302.pem (1708 bytes)
	I0917 01:09:35.505996  185907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0917 01:09:35.532621  185907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 01:09:35.558078  185907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 01:09:35.584724  185907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/stopped-upgrade-369624/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 01:09:35.611354  185907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 01:09:35.637427  185907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 01:09:35.663190  185907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 01:09:35.690865  185907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 01:09:35.714407  185907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/certs/145530.pem --> /usr/share/ca-certificates/145530.pem (1338 bytes)
	I0917 01:09:35.743200  185907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/files/etc/ssl/certs/1455302.pem --> /usr/share/ca-certificates/1455302.pem (1708 bytes)
	I0917 01:09:35.772166  185907 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 01:09:35.798993  185907 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 01:09:35.817981  185907 ssh_runner.go:195] Run: openssl version
	I0917 01:09:35.823763  185907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145530.pem && ln -fs /usr/share/ca-certificates/145530.pem /etc/ssl/certs/145530.pem"
	I0917 01:09:35.833819  185907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145530.pem
	I0917 01:09:35.838899  185907 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:07 /usr/share/ca-certificates/145530.pem
	I0917 01:09:35.838964  185907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145530.pem
	I0917 01:09:35.844946  185907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/145530.pem /etc/ssl/certs/51391683.0"
	I0917 01:09:35.855296  185907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1455302.pem && ln -fs /usr/share/ca-certificates/1455302.pem /etc/ssl/certs/1455302.pem"
	I0917 01:09:35.865579  185907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1455302.pem
	I0917 01:09:35.870246  185907 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:07 /usr/share/ca-certificates/1455302.pem
	I0917 01:09:35.870309  185907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1455302.pem
	I0917 01:09:35.875671  185907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1455302.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 01:09:35.886366  185907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 01:09:35.897100  185907 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:09:35.901804  185907 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 16 23:58 /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:09:35.901885  185907 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 01:09:35.907219  185907 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 01:09:35.917680  185907 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0917 01:09:35.921928  185907 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0917 01:09:35.921979  185907 kubeadm.go:404] StartCluster: {Name:stopped-upgrade-369624 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-369624 Name
space:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.95 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0917 01:09:35.922046  185907 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 01:09:35.922114  185907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 01:09:35.968631  185907 cri.go:89] found id: ""
	I0917 01:09:35.968714  185907 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 01:09:35.981807  185907 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 01:09:35.995095  185907 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 01:09:36.004569  185907 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 01:09:36.004610  185907 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0917 01:09:36.260463  185907 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Sep 17 01:09:40 pause-003341 crio[3387]: time="2025-09-17 01:09:40.992090875Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758071380992043907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c695e19a-7771-4f46-ad5c-eb95f8268030 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:09:40 pause-003341 crio[3387]: time="2025-09-17 01:09:40.993112037Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cccea1c7-6b00-49f2-8773-4c46c1f62194 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:40 pause-003341 crio[3387]: time="2025-09-17 01:09:40.993446827Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cccea1c7-6b00-49f2-8773-4c46c1f62194 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:40 pause-003341 crio[3387]: time="2025-09-17 01:09:40.994103696Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51db5cdd0370ce980a6bcc64bce3a15578324488f0f7aa3f2229b04bad55a942,PodSandboxId:bdf83652d8a1c8c16653e066bb82f130cfd92cd7dc83c0401ed5fac46f96a4c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758071364110984485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xthx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1459c5-1696-4e03-a638-921f1e6c547c,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63088e8180f123f7e5b2adbaaf66c2f5f9f2ecea0273096f52c46811a556f99,PodSandboxId:e055cde0b61b8fbef68da0c39cd1e454bceca7b21cefb3de2365ad3892ef6317,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758071360511315171,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b1508a99ce0ed02c62e794b4b7bc3d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257
,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a70955e4b7f57171a2ea9b731283569e648323092e7f15408d6fbca30d0385,PodSandboxId:60a80d420056fb10714ec60d9b65582ccd837d370130760a85cca15542b4cd2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758071360470907558,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 527d74da00d6a6d61913ea63691d068d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,
io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2208a1f1a4acb35cf3a0b4bc20c8fcce68d32033c91a0d3fce5bb5dfc70c1c9b,PodSandboxId:74c972177ae29bc7723333b045d32166a830cce8e9ade3e5025eec94fc71984c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758071360493257425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
255972a0efec9de7a8337349e9eb993,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db35cfba9ac2bd3a9b79076c9d536faff9fe56c07347c4203ee6d0811e556928,PodSandboxId:aac61577b13d6aa7fdb45bb1d59f836425006c73e771dce9c18298972c524a1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758071360478760761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernete
s.pod.name: kube-apiserver-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d1e7d0337d35447b7033e62317a447,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db129cfeb1cd59eb8fbb9db5803bfc99d8cd4002a319fb0931317d3ff6fc999,PodSandboxId:a2daacc4465fe0e66d882ae9c63fa060a7a612565c1d5ffa731330d297919c1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175807
1356476984304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-955n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79aefa4e-3e77-4863-bc04-390dc327197b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce8c60df3d44f099c4f0580568abfd3263339a7ed6902dd33ff338e4a1b2aaff,PodSandboxId:74c972177ae29bc7723333b045d32166a830cce8e9ade3e
5025eec94fc71984c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758071335307450231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0255972a0efec9de7a8337349e9eb993,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e1d4b09635725fe3cc3
618f2b4f504d842e8fa45c3beb86351205891a16273,PodSandboxId:60a80d420056fb10714ec60d9b65582ccd837d370130760a85cca15542b4cd2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758071335332100449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 527d74da00d6a6d61913ea63691d068d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:333555e32dc3f31b5f185685584e3326582004dd3c02757ed5391fcbd05013a5,PodSandboxId:e055cde0b61b8fbef68da0c39cd1e454bceca7b21cefb3de2365ad3892ef6317,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758071334986881610,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b1508a99ce0ed02c62e794b4b7bc3d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50dfd1f5829bf9fe520d625940dafacd9399aa820576b5ca8cd609c37b57203,PodSandboxId:bdf83652d8a1c8c16653e066bb82f130cfd92cd7dc83c0401ed5fac46f96a4c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758071334996787963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xthx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1459c5-1696-4e03-a638-921f1e6c547c,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa7dd95863c1e7ffdc2589551415495541f8d5d4e89f4ee106a8a1b1072693d,PodSandboxId:aac61577b13d6aa7fdb45bb1d59f836425006c73e771dce9c18298972c524a1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1758071334877583521,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d1e7d0337d35447b7033e62317a447,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c017f30b706451dfedd28d44162a42e54b53c998b4d03a79a2e78f229bc8c3,PodSandboxId:0afad675e99000cb048506aa1762e470d0286e6ad6c00520e44d7fda411ebdb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758071322826330209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-955n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79aefa4e-3e77-4863-bc04-390dc327197b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cccea1c7-6b00-49f2-8773-4c46c1f62194 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.061030627Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=859136bf-b6ae-4139-ae24-d0370a62f33e name=/runtime.v1.RuntimeService/Version
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.061147260Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=859136bf-b6ae-4139-ae24-d0370a62f33e name=/runtime.v1.RuntimeService/Version
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.062994140Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2239cfc6-b06a-477b-822c-23f95995aa1b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.064014864Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758071381063990015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2239cfc6-b06a-477b-822c-23f95995aa1b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.064998503Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47c64958-ac08-49b6-a205-960aa1512b49 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.065092262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47c64958-ac08-49b6-a205-960aa1512b49 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.065475855Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51db5cdd0370ce980a6bcc64bce3a15578324488f0f7aa3f2229b04bad55a942,PodSandboxId:bdf83652d8a1c8c16653e066bb82f130cfd92cd7dc83c0401ed5fac46f96a4c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758071364110984485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xthx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1459c5-1696-4e03-a638-921f1e6c547c,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63088e8180f123f7e5b2adbaaf66c2f5f9f2ecea0273096f52c46811a556f99,PodSandboxId:e055cde0b61b8fbef68da0c39cd1e454bceca7b21cefb3de2365ad3892ef6317,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758071360511315171,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b1508a99ce0ed02c62e794b4b7bc3d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257
,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a70955e4b7f57171a2ea9b731283569e648323092e7f15408d6fbca30d0385,PodSandboxId:60a80d420056fb10714ec60d9b65582ccd837d370130760a85cca15542b4cd2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758071360470907558,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 527d74da00d6a6d61913ea63691d068d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,
io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2208a1f1a4acb35cf3a0b4bc20c8fcce68d32033c91a0d3fce5bb5dfc70c1c9b,PodSandboxId:74c972177ae29bc7723333b045d32166a830cce8e9ade3e5025eec94fc71984c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758071360493257425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
255972a0efec9de7a8337349e9eb993,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db35cfba9ac2bd3a9b79076c9d536faff9fe56c07347c4203ee6d0811e556928,PodSandboxId:aac61577b13d6aa7fdb45bb1d59f836425006c73e771dce9c18298972c524a1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758071360478760761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernete
s.pod.name: kube-apiserver-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d1e7d0337d35447b7033e62317a447,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db129cfeb1cd59eb8fbb9db5803bfc99d8cd4002a319fb0931317d3ff6fc999,PodSandboxId:a2daacc4465fe0e66d882ae9c63fa060a7a612565c1d5ffa731330d297919c1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175807
1356476984304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-955n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79aefa4e-3e77-4863-bc04-390dc327197b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce8c60df3d44f099c4f0580568abfd3263339a7ed6902dd33ff338e4a1b2aaff,PodSandboxId:74c972177ae29bc7723333b045d32166a830cce8e9ade3e
5025eec94fc71984c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758071335307450231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0255972a0efec9de7a8337349e9eb993,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e1d4b09635725fe3cc3
618f2b4f504d842e8fa45c3beb86351205891a16273,PodSandboxId:60a80d420056fb10714ec60d9b65582ccd837d370130760a85cca15542b4cd2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758071335332100449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 527d74da00d6a6d61913ea63691d068d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:333555e32dc3f31b5f185685584e3326582004dd3c02757ed5391fcbd05013a5,PodSandboxId:e055cde0b61b8fbef68da0c39cd1e454bceca7b21cefb3de2365ad3892ef6317,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758071334986881610,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b1508a99ce0ed02c62e794b4b7bc3d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50dfd1f5829bf9fe520d625940dafacd9399aa820576b5ca8cd609c37b57203,PodSandboxId:bdf83652d8a1c8c16653e066bb82f130cfd92cd7dc83c0401ed5fac46f96a4c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758071334996787963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xthx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1459c5-1696-4e03-a638-921f1e6c547c,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa7dd95863c1e7ffdc2589551415495541f8d5d4e89f4ee106a8a1b1072693d,PodSandboxId:aac61577b13d6aa7fdb45bb1d59f836425006c73e771dce9c18298972c524a1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1758071334877583521,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d1e7d0337d35447b7033e62317a447,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c017f30b706451dfedd28d44162a42e54b53c998b4d03a79a2e78f229bc8c3,PodSandboxId:0afad675e99000cb048506aa1762e470d0286e6ad6c00520e44d7fda411ebdb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758071322826330209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-955n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79aefa4e-3e77-4863-bc04-390dc327197b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47c64958-ac08-49b6-a205-960aa1512b49 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.124023215Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab578b66-6d80-45fc-b106-b490858cb0c4 name=/runtime.v1.RuntimeService/Version
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.124158306Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab578b66-6d80-45fc-b106-b490858cb0c4 name=/runtime.v1.RuntimeService/Version
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.126155868Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af20a673-4675-4020-a580-07d393a29077 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.126885277Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758071381126849478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af20a673-4675-4020-a580-07d393a29077 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.127735746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cc20fc1-d271-47ac-a161-22e64087e2ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.127924387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cc20fc1-d271-47ac-a161-22e64087e2ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.128307088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51db5cdd0370ce980a6bcc64bce3a15578324488f0f7aa3f2229b04bad55a942,PodSandboxId:bdf83652d8a1c8c16653e066bb82f130cfd92cd7dc83c0401ed5fac46f96a4c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758071364110984485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xthx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1459c5-1696-4e03-a638-921f1e6c547c,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63088e8180f123f7e5b2adbaaf66c2f5f9f2ecea0273096f52c46811a556f99,PodSandboxId:e055cde0b61b8fbef68da0c39cd1e454bceca7b21cefb3de2365ad3892ef6317,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758071360511315171,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b1508a99ce0ed02c62e794b4b7bc3d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257
,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a70955e4b7f57171a2ea9b731283569e648323092e7f15408d6fbca30d0385,PodSandboxId:60a80d420056fb10714ec60d9b65582ccd837d370130760a85cca15542b4cd2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758071360470907558,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 527d74da00d6a6d61913ea63691d068d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,
io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2208a1f1a4acb35cf3a0b4bc20c8fcce68d32033c91a0d3fce5bb5dfc70c1c9b,PodSandboxId:74c972177ae29bc7723333b045d32166a830cce8e9ade3e5025eec94fc71984c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758071360493257425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
255972a0efec9de7a8337349e9eb993,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db35cfba9ac2bd3a9b79076c9d536faff9fe56c07347c4203ee6d0811e556928,PodSandboxId:aac61577b13d6aa7fdb45bb1d59f836425006c73e771dce9c18298972c524a1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758071360478760761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernete
s.pod.name: kube-apiserver-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d1e7d0337d35447b7033e62317a447,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db129cfeb1cd59eb8fbb9db5803bfc99d8cd4002a319fb0931317d3ff6fc999,PodSandboxId:a2daacc4465fe0e66d882ae9c63fa060a7a612565c1d5ffa731330d297919c1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175807
1356476984304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-955n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79aefa4e-3e77-4863-bc04-390dc327197b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce8c60df3d44f099c4f0580568abfd3263339a7ed6902dd33ff338e4a1b2aaff,PodSandboxId:74c972177ae29bc7723333b045d32166a830cce8e9ade3e
5025eec94fc71984c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758071335307450231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0255972a0efec9de7a8337349e9eb993,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e1d4b09635725fe3cc3
618f2b4f504d842e8fa45c3beb86351205891a16273,PodSandboxId:60a80d420056fb10714ec60d9b65582ccd837d370130760a85cca15542b4cd2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758071335332100449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 527d74da00d6a6d61913ea63691d068d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:333555e32dc3f31b5f185685584e3326582004dd3c02757ed5391fcbd05013a5,PodSandboxId:e055cde0b61b8fbef68da0c39cd1e454bceca7b21cefb3de2365ad3892ef6317,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758071334986881610,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b1508a99ce0ed02c62e794b4b7bc3d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50dfd1f5829bf9fe520d625940dafacd9399aa820576b5ca8cd609c37b57203,PodSandboxId:bdf83652d8a1c8c16653e066bb82f130cfd92cd7dc83c0401ed5fac46f96a4c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758071334996787963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xthx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1459c5-1696-4e03-a638-921f1e6c547c,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa7dd95863c1e7ffdc2589551415495541f8d5d4e89f4ee106a8a1b1072693d,PodSandboxId:aac61577b13d6aa7fdb45bb1d59f836425006c73e771dce9c18298972c524a1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1758071334877583521,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d1e7d0337d35447b7033e62317a447,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c017f30b706451dfedd28d44162a42e54b53c998b4d03a79a2e78f229bc8c3,PodSandboxId:0afad675e99000cb048506aa1762e470d0286e6ad6c00520e44d7fda411ebdb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758071322826330209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-955n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79aefa4e-3e77-4863-bc04-390dc327197b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cc20fc1-d271-47ac-a161-22e64087e2ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.188652740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ac79b72-ea93-4caa-ae8a-c9a2f081363f name=/runtime.v1.RuntimeService/Version
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.188740166Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ac79b72-ea93-4caa-ae8a-c9a2f081363f name=/runtime.v1.RuntimeService/Version
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.191078322Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d03e093f-3ce4-4959-b964-3d59e0524dbd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.191751755Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758071381191728862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d03e093f-3ce4-4959-b964-3d59e0524dbd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.192906856Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c68272a6-d6ad-4f09-835f-722e18a0d2e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.193120812Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c68272a6-d6ad-4f09-835f-722e18a0d2e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 17 01:09:41 pause-003341 crio[3387]: time="2025-09-17 01:09:41.195283912Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51db5cdd0370ce980a6bcc64bce3a15578324488f0f7aa3f2229b04bad55a942,PodSandboxId:bdf83652d8a1c8c16653e066bb82f130cfd92cd7dc83c0401ed5fac46f96a4c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758071364110984485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xthx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1459c5-1696-4e03-a638-921f1e6c547c,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b63088e8180f123f7e5b2adbaaf66c2f5f9f2ecea0273096f52c46811a556f99,PodSandboxId:e055cde0b61b8fbef68da0c39cd1e454bceca7b21cefb3de2365ad3892ef6317,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758071360511315171,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b1508a99ce0ed02c62e794b4b7bc3d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257
,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79a70955e4b7f57171a2ea9b731283569e648323092e7f15408d6fbca30d0385,PodSandboxId:60a80d420056fb10714ec60d9b65582ccd837d370130760a85cca15542b4cd2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758071360470907558,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 527d74da00d6a6d61913ea63691d068d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,
io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2208a1f1a4acb35cf3a0b4bc20c8fcce68d32033c91a0d3fce5bb5dfc70c1c9b,PodSandboxId:74c972177ae29bc7723333b045d32166a830cce8e9ade3e5025eec94fc71984c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758071360493257425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0
255972a0efec9de7a8337349e9eb993,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db35cfba9ac2bd3a9b79076c9d536faff9fe56c07347c4203ee6d0811e556928,PodSandboxId:aac61577b13d6aa7fdb45bb1d59f836425006c73e771dce9c18298972c524a1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758071360478760761,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernete
s.pod.name: kube-apiserver-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d1e7d0337d35447b7033e62317a447,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db129cfeb1cd59eb8fbb9db5803bfc99d8cd4002a319fb0931317d3ff6fc999,PodSandboxId:a2daacc4465fe0e66d882ae9c63fa060a7a612565c1d5ffa731330d297919c1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:175807
1356476984304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-955n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79aefa4e-3e77-4863-bc04-390dc327197b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce8c60df3d44f099c4f0580568abfd3263339a7ed6902dd33ff338e4a1b2aaff,PodSandboxId:74c972177ae29bc7723333b045d32166a830cce8e9ade3e
5025eec94fc71984c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758071335307450231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0255972a0efec9de7a8337349e9eb993,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12e1d4b09635725fe3cc3
618f2b4f504d842e8fa45c3beb86351205891a16273,PodSandboxId:60a80d420056fb10714ec60d9b65582ccd837d370130760a85cca15542b4cd2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758071335332100449,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 527d74da00d6a6d61913ea63691d068d,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:333555e32dc3f31b5f185685584e3326582004dd3c02757ed5391fcbd05013a5,PodSandboxId:e055cde0b61b8fbef68da0c39cd1e454bceca7b21cefb3de2365ad3892ef6317,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758071334986881610,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b1508a99ce0ed02c62e794b4b7bc3d,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e50dfd1f5829bf9fe520d625940dafacd9399aa820576b5ca8cd609c37b57203,PodSandboxId:bdf83652d8a1c8c16653e066bb82f130cfd92cd7dc83c0401ed5fac46f96a4c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758071334996787963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xthx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e1459c5-1696-4e03-a638-921f1e6c547c,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4aa7dd95863c1e7ffdc2589551415495541f8d5d4e89f4ee106a8a1b1072693d,PodSandboxId:aac61577b13d6aa7fdb45bb1d59f836425006c73e771dce9c18298972c524a1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_EXITED,CreatedAt:1758071334877583521,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-003341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d1e7d0337d35447b7033e62317a447,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\
"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12c017f30b706451dfedd28d44162a42e54b53c998b4d03a79a2e78f229bc8c3,PodSandboxId:0afad675e99000cb048506aa1762e470d0286e6ad6c00520e44d7fda411ebdb4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758071322826330209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-955n2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79aefa4e-3e77-4863-bc04-390dc327197b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c68272a6-d6ad-4f09-835f-722e18a0d2e1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	51db5cdd0370c       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   17 seconds ago      Running             kube-proxy                3                   bdf83652d8a1c       kube-proxy-9xthx
	b63088e8180f1       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   20 seconds ago      Running             kube-controller-manager   3                   e055cde0b61b8       kube-controller-manager-pause-003341
	2208a1f1a4acb       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   20 seconds ago      Running             kube-scheduler            3                   74c972177ae29       kube-scheduler-pause-003341
	db35cfba9ac2b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   20 seconds ago      Running             kube-apiserver            3                   aac61577b13d6       kube-apiserver-pause-003341
	79a70955e4b7f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   20 seconds ago      Running             etcd                      3                   60a80d420056f       etcd-pause-003341
	5db129cfeb1cd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   24 seconds ago      Running             coredns                   2                   a2daacc4465fe       coredns-66bc5c9577-955n2
	12e1d4b096357       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   45 seconds ago      Exited              etcd                      2                   60a80d420056f       etcd-pause-003341
	ce8c60df3d44f       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   45 seconds ago      Exited              kube-scheduler            2                   74c972177ae29       kube-scheduler-pause-003341
	e50dfd1f5829b       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   46 seconds ago      Exited              kube-proxy                2                   bdf83652d8a1c       kube-proxy-9xthx
	333555e32dc3f       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   46 seconds ago      Exited              kube-controller-manager   2                   e055cde0b61b8       kube-controller-manager-pause-003341
	4aa7dd95863c1       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   46 seconds ago      Exited              kube-apiserver            2                   aac61577b13d6       kube-apiserver-pause-003341
	12c017f30b706       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   58 seconds ago      Exited              coredns                   1                   0afad675e9900       coredns-66bc5c9577-955n2
	
	
	==> coredns [12c017f30b706451dfedd28d44162a42e54b53c998b4d03a79a2e78f229bc8c3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:46975 - 23717 "HINFO IN 4183112172293320515.7005939120852097983. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.117006672s
	
	
	==> coredns [5db129cfeb1cd59eb8fbb9db5803bfc99d8cd4002a319fb0931317d3ff6fc999] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:37792->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:37766->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:37782->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54782 - 38568 "HINFO IN 8916208798569525581.7855192822637624078. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.082294771s
	
	
	==> describe nodes <==
	Name:               pause-003341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-003341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=pause-003341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T01_07_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 01:07:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-003341
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 01:09:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 01:09:23 +0000   Wed, 17 Sep 2025 01:07:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 01:09:23 +0000   Wed, 17 Sep 2025 01:07:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 01:09:23 +0000   Wed, 17 Sep 2025 01:07:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 01:09:23 +0000   Wed, 17 Sep 2025 01:07:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.157
	  Hostname:    pause-003341
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 3cce372e040d49ad910673a91e6bcbb4
	  System UUID:                3cce372e-040d-49ad-9106-73a91e6bcbb4
	  Boot ID:                    f1b2dda5-6d4d-45c6-80f9-b55d0e3d3477
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-955n2                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m13s
	  kube-system                 etcd-pause-003341                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m19s
	  kube-system                 kube-apiserver-pause-003341             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-controller-manager-pause-003341    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-proxy-9xthx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-scheduler-pause-003341             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 17s                    kube-proxy       
	  Normal  Starting                 2m11s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m26s (x7 over 2m27s)  kubelet          Node pause-003341 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m26s (x8 over 2m27s)  kubelet          Node pause-003341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m26s (x8 over 2m27s)  kubelet          Node pause-003341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m19s                  kubelet          Node pause-003341 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m19s                  kubelet          Node pause-003341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m19s                  kubelet          Node pause-003341 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m19s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m18s                  kubelet          Node pause-003341 status is now: NodeReady
	  Normal  RegisteredNode           2m14s                  node-controller  Node pause-003341 event: Registered Node pause-003341 in Controller
	  Normal  Starting                 22s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21s (x8 over 22s)      kubelet          Node pause-003341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 22s)      kubelet          Node pause-003341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 22s)      kubelet          Node pause-003341 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14s                    node-controller  Node pause-003341 event: Registered Node pause-003341 in Controller
	
	
	==> dmesg <==
	[Sep17 01:06] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000059] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.006706] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.488246] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep17 01:07] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.117452] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.166116] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.165948] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.028750] kauditd_printk_skb: 18 callbacks suppressed
	[Sep17 01:08] kauditd_printk_skb: 258 callbacks suppressed
	[  +9.687866] kauditd_printk_skb: 275 callbacks suppressed
	[Sep17 01:09] kauditd_printk_skb: 245 callbacks suppressed
	[  +4.696309] kauditd_printk_skb: 99 callbacks suppressed
	
	
	==> etcd [12e1d4b09635725fe3cc3618f2b4f504d842e8fa45c3beb86351205891a16273] <==
	{"level":"info","ts":"2025-09-17T01:08:57.035452Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-17T01:08:57.093136Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.157:2379"}
	{"level":"info","ts":"2025-09-17T01:08:57.093622Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-17T01:08:57.096039Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-17T01:08:57.096581Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-003341","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.157:2380"],"advertise-client-urls":["https://192.168.83.157:2379"]}
	{"level":"warn","ts":"2025-09-17T01:08:57.096741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41646","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:41646: use of closed network connection"}
	2025/09/17 01:08:57 WARNING: [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	2025/09/17 01:08:57 WARNING: [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	{"level":"warn","ts":"2025-09-17T01:08:57.101303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41656","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:41656: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T01:08:57.105907Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T01:08:57.106026Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-17T01:08:57.106124Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T01:08:57.106150Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"734c93290a047874","current-leader-member-id":"734c93290a047874"}
	{"level":"info","ts":"2025-09-17T01:08:57.106268Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-17T01:08:57.106292Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-17T01:08:57.107956Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T01:08:57.115146Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T01:08:57.115263Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-17T01:08:57.115469Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.157:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-17T01:08:57.115525Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.157:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-17T01:08:57.115856Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.157:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T01:08:57.119128Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.83.157:2380"}
	{"level":"error","ts":"2025-09-17T01:08:57.119230Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.157:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-17T01:08:57.119286Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.83.157:2380"}
	{"level":"info","ts":"2025-09-17T01:08:57.119315Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-003341","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.157:2380"],"advertise-client-urls":["https://192.168.83.157:2379"]}
	
	
	==> etcd [79a70955e4b7f57171a2ea9b731283569e648323092e7f15408d6fbca30d0385] <==
	{"level":"warn","ts":"2025-09-17T01:09:22.240289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.264080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.275919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.284732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.301313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.323853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.354375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.391778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.404093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.426470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.449295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.467164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.486196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.523317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.560002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.567644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.583707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.600045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.617589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.639993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.664994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.700261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:22.794866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T01:09:35.070733Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.625699ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8679730973141494354 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.83.157\" mod_revision:463 > success:<request_put:<key:\"/registry/masterleases/192.168.83.157\" value_size:67 lease:8679730973141494352 >> failure:<request_range:<key:\"/registry/masterleases/192.168.83.157\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-17T01:09:35.071870Z","caller":"traceutil/trace.go:172","msg":"trace[1390127224] transaction","detail":"{read_only:false; response_revision:521; number_of_response:1; }","duration":"132.661102ms","start":"2025-09-17T01:09:34.938494Z","end":"2025-09-17T01:09:35.071155Z","steps":["trace[1390127224] 'compare'  (duration: 123.469165ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:09:41 up 2 min,  0 users,  load average: 1.40, 0.60, 0.23
	Linux pause-003341 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Sep  9 02:24:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4aa7dd95863c1e7ffdc2589551415495541f8d5d4e89f4ee106a8a1b1072693d] <==
	W0917 01:08:57.583918       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:08:57.584006       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0917 01:08:57.585502       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I0917 01:08:57.619890       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0917 01:08:57.621032       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0917 01:08:57.621145       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0917 01:08:57.621365       1 instance.go:239] Using reconciler: lease
	W0917 01:08:57.622504       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:08:57.622999       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:08:58.585206       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:08:58.585206       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:08:58.623905       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:00.287892       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:00.355658       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:00.505330       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:02.372949       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:02.846260       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:03.033906       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:06.617984       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:07.171472       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:07.690247       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:13.233994       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:13.419292       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0917 01:09:14.626379       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0917 01:09:17.622611       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [db35cfba9ac2bd3a9b79076c9d536faff9fe56c07347c4203ee6d0811e556928] <==
	I0917 01:09:23.650027       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0917 01:09:23.659267       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I0917 01:09:23.661673       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0917 01:09:23.664027       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0917 01:09:23.664086       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I0917 01:09:23.676866       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E0917 01:09:23.697065       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0917 01:09:23.708714       1 cache.go:39] Caches are synced for autoregister controller
	I0917 01:09:23.710892       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0917 01:09:23.721409       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 01:09:23.721452       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 01:09:23.721575       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 01:09:23.721684       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 01:09:23.721740       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 01:09:23.728103       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I0917 01:09:23.730424       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0917 01:09:23.856416       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0917 01:09:23.856465       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0917 01:09:24.434457       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0917 01:09:25.014175       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0917 01:09:25.052480       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0917 01:09:25.085134       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0917 01:09:25.093449       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0917 01:09:27.119479       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0917 01:09:27.299543       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [333555e32dc3f31b5f185685584e3326582004dd3c02757ed5391fcbd05013a5] <==
	I0917 01:08:57.316973       1 serving.go:386] Generated self-signed cert in-memory
	I0917 01:08:57.912934       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0917 01:08:57.912975       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 01:08:57.917970       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0917 01:08:57.918687       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0917 01:08:57.919158       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 01:08:57.919315       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [b63088e8180f123f7e5b2adbaaf66c2f5f9f2ecea0273096f52c46811a556f99] <==
	I0917 01:09:27.144227       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0917 01:09:27.144923       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0917 01:09:27.144986       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0917 01:09:27.145059       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0917 01:09:27.145093       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0917 01:09:27.145128       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0917 01:09:27.145317       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0917 01:09:27.146589       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0917 01:09:27.147134       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0917 01:09:27.147626       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0917 01:09:27.147716       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0917 01:09:27.149239       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0917 01:09:27.150488       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0917 01:09:27.150647       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0917 01:09:27.150835       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-003341"
	I0917 01:09:27.150927       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0917 01:09:27.153766       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0917 01:09:27.154731       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0917 01:09:27.154766       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0917 01:09:27.154774       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 01:09:27.157025       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0917 01:09:27.157781       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 01:09:27.160358       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0917 01:09:27.164186       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0917 01:09:27.171721       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	
	
	==> kube-proxy [51db5cdd0370ce980a6bcc64bce3a15578324488f0f7aa3f2229b04bad55a942] <==
	I0917 01:09:24.286094       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 01:09:24.386984       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 01:09:24.387042       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.157"]
	E0917 01:09:24.387136       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 01:09:24.429582       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0917 01:09:24.429650       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0917 01:09:24.429677       1 server_linux.go:132] "Using iptables Proxier"
	I0917 01:09:24.447920       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 01:09:24.448477       1 server.go:527] "Version info" version="v1.34.0"
	I0917 01:09:24.448646       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 01:09:24.456516       1 config.go:200] "Starting service config controller"
	I0917 01:09:24.460011       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 01:09:24.458910       1 config.go:106] "Starting endpoint slice config controller"
	I0917 01:09:24.460114       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 01:09:24.458934       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 01:09:24.460167       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 01:09:24.460904       1 config.go:309] "Starting node config controller"
	I0917 01:09:24.460933       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 01:09:24.460939       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 01:09:24.561126       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 01:09:24.561353       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0917 01:09:24.561368       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e50dfd1f5829bf9fe520d625940dafacd9399aa820576b5ca8cd609c37b57203] <==
	I0917 01:08:56.716399       1 server_linux.go:53] "Using iptables proxy"
	I0917 01:08:57.114038       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0917 01:09:07.115913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-003341&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [2208a1f1a4acb35cf3a0b4bc20c8fcce68d32033c91a0d3fce5bb5dfc70c1c9b] <==
	I0917 01:09:21.312048       1 serving.go:386] Generated self-signed cert in-memory
	W0917 01:09:23.524000       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0917 01:09:23.524086       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0917 01:09:23.524110       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0917 01:09:23.524126       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0917 01:09:23.652739       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0917 01:09:23.656730       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 01:09:23.660637       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 01:09:23.660691       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 01:09:23.662996       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 01:09:23.660717       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 01:09:23.763418       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ce8c60df3d44f099c4f0580568abfd3263339a7ed6902dd33ff338e4a1b2aaff] <==
	I0917 01:08:57.662078       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Sep 17 01:09:23 pause-003341 kubelet[4588]: E0917 01:09:23.236531    4588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-003341\" not found" node="pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: E0917 01:09:23.236989    4588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-003341\" not found" node="pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: E0917 01:09:23.237311    4588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-003341\" not found" node="pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: E0917 01:09:23.239139    4588 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-003341\" not found" node="pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.494650    4588 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.709768    4588 kubelet_node_status.go:124] "Node was previously registered" node="pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.709919    4588 kubelet_node_status.go:78] "Successfully registered node" node="pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.709959    4588 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.711482    4588 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: E0917 01:09:23.751564    4588 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-003341\" already exists" pod="kube-system/etcd-pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.751631    4588 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: E0917 01:09:23.762491    4588 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-003341\" already exists" pod="kube-system/kube-apiserver-pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.762517    4588 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: E0917 01:09:23.773727    4588 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-003341\" already exists" pod="kube-system/kube-controller-manager-pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.773754    4588 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: E0917 01:09:23.784114    4588 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-003341\" already exists" pod="kube-system/kube-scheduler-pause-003341"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.786181    4588 apiserver.go:52] "Watching apiserver"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.797896    4588 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.836030    4588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e1459c5-1696-4e03-a638-921f1e6c547c-xtables-lock\") pod \"kube-proxy-9xthx\" (UID: \"9e1459c5-1696-4e03-a638-921f1e6c547c\") " pod="kube-system/kube-proxy-9xthx"
	Sep 17 01:09:23 pause-003341 kubelet[4588]: I0917 01:09:23.836178    4588 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e1459c5-1696-4e03-a638-921f1e6c547c-lib-modules\") pod \"kube-proxy-9xthx\" (UID: \"9e1459c5-1696-4e03-a638-921f1e6c547c\") " pod="kube-system/kube-proxy-9xthx"
	Sep 17 01:09:24 pause-003341 kubelet[4588]: I0917 01:09:24.092091    4588 scope.go:117] "RemoveContainer" containerID="e50dfd1f5829bf9fe520d625940dafacd9399aa820576b5ca8cd609c37b57203"
	Sep 17 01:09:29 pause-003341 kubelet[4588]: E0917 01:09:29.946898    4588 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758071369946217054  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 17 01:09:29 pause-003341 kubelet[4588]: E0917 01:09:29.946953    4588 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758071369946217054  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 17 01:09:39 pause-003341 kubelet[4588]: E0917 01:09:39.949619    4588 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758071379948700643  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Sep 17 01:09:39 pause-003341 kubelet[4588]: E0917 01:09:39.950119    4588 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758071379948700643  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-003341 -n pause-003341
helpers_test.go:269: (dbg) Run:  kubectl --context pause-003341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (99.34s)

                                                
                                    

Test pass (275/324)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 7.41
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 6.32
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.15
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.64
22 TestOffline 111.28
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 208.98
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 10.65
35 TestAddons/parallel/Registry 15.45
36 TestAddons/parallel/RegistryCreds 0.86
38 TestAddons/parallel/InspektorGadget 6.32
39 TestAddons/parallel/MetricsServer 6.96
41 TestAddons/parallel/CSI 51.24
42 TestAddons/parallel/Headlamp 28.47
43 TestAddons/parallel/CloudSpanner 5.93
44 TestAddons/parallel/LocalPath 54.62
45 TestAddons/parallel/NvidiaDevicePlugin 6.87
46 TestAddons/parallel/Yakd 12.64
48 TestAddons/StoppedEnableDisable 87.08
49 TestCertOptions 52.04
50 TestCertExpiration 315.65
52 TestForceSystemdFlag 72.71
53 TestForceSystemdEnv 45.6
55 TestKVMDriverInstallOrUpdate 3.22
59 TestErrorSpam/setup 40.4
60 TestErrorSpam/start 0.39
61 TestErrorSpam/status 0.88
62 TestErrorSpam/pause 1.96
63 TestErrorSpam/unpause 2.46
64 TestErrorSpam/stop 5.47
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 83.46
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 54.61
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 4.39
76 TestFunctional/serial/CacheCmd/cache/add_local 1.56
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 31.59
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.57
87 TestFunctional/serial/LogsFileCmd 1.57
88 TestFunctional/serial/InvalidService 4.56
90 TestFunctional/parallel/ConfigCmd 0.35
92 TestFunctional/parallel/DryRun 0.27
93 TestFunctional/parallel/InternationalLanguage 0.14
94 TestFunctional/parallel/StatusCmd 0.78
98 TestFunctional/parallel/ServiceCmdConnect 19.55
99 TestFunctional/parallel/AddonsCmd 0.16
102 TestFunctional/parallel/SSHCmd 0.41
103 TestFunctional/parallel/CpCmd 1.42
104 TestFunctional/parallel/MySQL 23.83
105 TestFunctional/parallel/FileSync 0.24
106 TestFunctional/parallel/CertSync 1.41
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
114 TestFunctional/parallel/License 0.45
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.49
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.45
122 TestFunctional/parallel/ImageCommands/Setup 0.99
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.96
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
137 TestFunctional/parallel/ProfileCmd/profile_list 0.47
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.47
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.49
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.47
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.66
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.11
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
146 TestFunctional/parallel/MountCmd/any-port 57.51
147 TestFunctional/parallel/MountCmd/specific-port 1.81
148 TestFunctional/parallel/MountCmd/VerifyCleanup 1.24
149 TestFunctional/parallel/ServiceCmd/List 1.25
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.24
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 243.78
162 TestMultiControlPlane/serial/DeployApp 5.44
163 TestMultiControlPlane/serial/PingHostFromPods 1.27
164 TestMultiControlPlane/serial/AddWorkerNode 47.83
165 TestMultiControlPlane/serial/NodeLabels 0.08
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
167 TestMultiControlPlane/serial/CopyFile 13.55
168 TestMultiControlPlane/serial/StopSecondaryNode 87.04
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
170 TestMultiControlPlane/serial/RestartSecondaryNode 36.92
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.95
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 377.48
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.66
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
175 TestMultiControlPlane/serial/StopCluster 260.69
176 TestMultiControlPlane/serial/RestartCluster 98.72
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
178 TestMultiControlPlane/serial/AddSecondaryNode 81.51
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.97
183 TestJSONOutput/start/Command 77.89
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.8
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.72
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 8
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 86.79
215 TestMountStart/serial/StartWithMountFirst 21.03
216 TestMountStart/serial/VerifyMountFirst 0.41
217 TestMountStart/serial/StartWithMountSecond 23.23
218 TestMountStart/serial/VerifyMountSecond 0.38
219 TestMountStart/serial/DeleteFirst 0.73
220 TestMountStart/serial/VerifyMountPostDelete 0.39
221 TestMountStart/serial/Stop 1.32
222 TestMountStart/serial/RestartStopped 20.08
223 TestMountStart/serial/VerifyMountPostStop 0.39
226 TestMultiNode/serial/FreshStart2Nodes 130.94
227 TestMultiNode/serial/DeployApp2Nodes 3.96
228 TestMultiNode/serial/PingHostFrom2Pods 0.81
229 TestMultiNode/serial/AddNode 45.64
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.62
232 TestMultiNode/serial/CopyFile 7.58
233 TestMultiNode/serial/StopNode 2.52
234 TestMultiNode/serial/StartAfterStop 40.33
235 TestMultiNode/serial/RestartKeepsNodes 298.8
236 TestMultiNode/serial/DeleteNode 2.75
237 TestMultiNode/serial/StopMultiNode 174.96
238 TestMultiNode/serial/RestartMultiNode 92.63
239 TestMultiNode/serial/ValidateNameConflict 43.19
246 TestScheduledStopUnix 111.2
250 TestRunningBinaryUpgrade 128.87
252 TestKubernetesUpgrade 130.14
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 84.24
265 TestPause/serial/Start 118.3
266 TestNoKubernetes/serial/StartWithStopK8s 30.74
267 TestNoKubernetes/serial/Start 41.97
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
269 TestNoKubernetes/serial/ProfileList 1.63
270 TestNoKubernetes/serial/Stop 1.45
271 TestNoKubernetes/serial/StartNoArgs 36.11
273 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
281 TestNetworkPlugins/group/false 4.01
285 TestStoppedBinaryUpgrade/Setup 0.55
286 TestStoppedBinaryUpgrade/Upgrade 121.01
288 TestStartStop/group/old-k8s-version/serial/FirstStart 61.47
289 TestStoppedBinaryUpgrade/MinikubeLogs 1.31
291 TestStartStop/group/no-preload/serial/FirstStart 113.29
293 TestStartStop/group/embed-certs/serial/FirstStart 113.96
294 TestStartStop/group/old-k8s-version/serial/DeployApp 9.34
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.32
296 TestStartStop/group/old-k8s-version/serial/Stop 84.33
298 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86
299 TestStartStop/group/no-preload/serial/DeployApp 9.32
300 TestStartStop/group/embed-certs/serial/DeployApp 8.33
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.72
302 TestStartStop/group/no-preload/serial/Stop 79.88
303 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.05
304 TestStartStop/group/embed-certs/serial/Stop 77.86
305 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
306 TestStartStop/group/old-k8s-version/serial/SecondStart 43.77
307 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.34
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 17.01
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.18
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 84.27
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.09
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
313 TestStartStop/group/embed-certs/serial/SecondStart 46.46
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
315 TestStartStop/group/no-preload/serial/SecondStart 76.61
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
317 TestStartStop/group/old-k8s-version/serial/Pause 2.97
319 TestStartStop/group/newest-cni/serial/FirstStart 81.4
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
322 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
323 TestStartStop/group/embed-certs/serial/Pause 3.57
324 TestNetworkPlugins/group/auto/Start 90.09
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
326 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 62.05
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 18.01
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.6
330 TestStartStop/group/newest-cni/serial/Stop 10.63
331 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
332 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
334 TestStartStop/group/no-preload/serial/Pause 3.62
335 TestStartStop/group/newest-cni/serial/SecondStart 40
336 TestNetworkPlugins/group/kindnet/Start 79.7
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.01
338 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
341 TestStartStop/group/newest-cni/serial/Pause 3.21
342 TestNetworkPlugins/group/calico/Start 94.64
343 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
344 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.38
345 TestNetworkPlugins/group/auto/KubeletFlags 0.32
346 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.84
347 TestNetworkPlugins/group/auto/NetCatPod 11.63
348 TestNetworkPlugins/group/custom-flannel/Start 92.63
349 TestNetworkPlugins/group/auto/DNS 0.17
350 TestNetworkPlugins/group/auto/Localhost 0.17
351 TestNetworkPlugins/group/auto/HairPin 0.19
352 TestNetworkPlugins/group/enable-default-cni/Start 101.47
353 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
354 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
355 TestNetworkPlugins/group/kindnet/NetCatPod 12.31
356 TestNetworkPlugins/group/kindnet/DNS 0.22
357 TestNetworkPlugins/group/kindnet/Localhost 0.14
358 TestNetworkPlugins/group/kindnet/HairPin 0.17
359 TestNetworkPlugins/group/flannel/Start 73.48
360 TestNetworkPlugins/group/calico/ControllerPod 6.01
361 TestNetworkPlugins/group/calico/KubeletFlags 0.28
362 TestNetworkPlugins/group/calico/NetCatPod 12.32
363 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
364 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.34
365 TestNetworkPlugins/group/calico/DNS 0.18
366 TestNetworkPlugins/group/calico/Localhost 0.18
367 TestNetworkPlugins/group/calico/HairPin 0.16
368 TestNetworkPlugins/group/custom-flannel/DNS 0.2
369 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
370 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
371 TestNetworkPlugins/group/bridge/Start 82.71
372 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
373 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
374 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
375 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
376 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
379 TestNetworkPlugins/group/flannel/NetCatPod 12.3
380 TestNetworkPlugins/group/flannel/DNS 0.25
381 TestNetworkPlugins/group/flannel/Localhost 0.15
382 TestNetworkPlugins/group/flannel/HairPin 0.14
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
384 TestNetworkPlugins/group/bridge/NetCatPod 10.24
385 TestNetworkPlugins/group/bridge/DNS 0.16
386 TestNetworkPlugins/group/bridge/Localhost 0.17
387 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (7.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-150893 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-150893 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (7.40522296s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.41s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0916 23:58:15.692910  145530 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0916 23:58:15.693015  145530 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
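For reference, the cached preload this check looks for can be inspected by hand. A minimal sketch, assuming the MINIKUBE_HOME layout shown above; the expected md5 is the checksum embedded in the preload download URL recorded later in this report:

	# list the cached preload tarball for v1.28.0 / cri-o
	ls -lh /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	# compare against 72bc7f8573f574c02d8c9a9b3496176b (the md5 from the ?checksum= query string)
	md5sum /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4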

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-150893
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-150893: exit status 85 (63.793296ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-150893 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-150893 │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:58:08
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:58:08.330809  145542 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:58:08.331097  145542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:58:08.331109  145542 out.go:374] Setting ErrFile to fd 2...
	I0916 23:58:08.331116  145542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:58:08.331346  145542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
	W0916 23:58:08.331504  145542 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21550-141589/.minikube/config/config.json: open /home/jenkins/minikube-integration/21550-141589/.minikube/config/config.json: no such file or directory
	I0916 23:58:08.332060  145542 out.go:368] Setting JSON to true
	I0916 23:58:08.333030  145542 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9632,"bootTime":1758057456,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:58:08.333126  145542 start.go:140] virtualization: kvm guest
	I0916 23:58:08.335375  145542 out.go:99] [download-only-150893] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0916 23:58:08.335523  145542 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 23:58:08.335587  145542 notify.go:220] Checking for updates...
	I0916 23:58:08.336719  145542 out.go:171] MINIKUBE_LOCATION=21550
	I0916 23:58:08.338188  145542 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:58:08.339601  145542 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	I0916 23:58:08.341038  145542 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	I0916 23:58:08.342226  145542 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 23:58:08.344441  145542 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 23:58:08.344725  145542 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:58:08.378937  145542 out.go:99] Using the kvm2 driver based on user configuration
	I0916 23:58:08.378976  145542 start.go:304] selected driver: kvm2
	I0916 23:58:08.378986  145542 start.go:918] validating driver "kvm2" against <nil>
	I0916 23:58:08.379305  145542 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:58:08.379416  145542 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21550-141589/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 23:58:08.393947  145542 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0916 23:58:08.394001  145542 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:58:08.394571  145542 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I0916 23:58:08.394724  145542 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 23:58:08.394775  145542 cni.go:84] Creating CNI manager for ""
	I0916 23:58:08.394848  145542 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 23:58:08.394875  145542 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 23:58:08.394961  145542 start.go:348] cluster config:
	{Name:download-only-150893 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-150893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:58:08.395170  145542 iso.go:125] acquiring lock: {Name:mkbc497934aeda3bf1eaa3e96176da91d2f10b30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:58:08.397038  145542 out.go:99] Downloading VM boot image ...
	I0916 23:58:08.397094  145542 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21550-141589/.minikube/cache/iso/amd64/minikube-v1.37.0-amd64.iso
	I0916 23:58:10.824598  145542 out.go:99] Starting "download-only-150893" primary control-plane node in "download-only-150893" cluster
	I0916 23:58:10.824643  145542 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0916 23:58:10.851476  145542 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0916 23:58:10.851524  145542 cache.go:58] Caching tarball of preloaded images
	I0916 23:58:10.851714  145542 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0916 23:58:10.853826  145542 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0916 23:58:10.853877  145542 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0916 23:58:10.886960  145542 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-150893 host does not exist
	  To start a cluster, run: "minikube start -p download-only-150893"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-150893
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (6.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-174003 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-174003 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (6.319105493s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (6.32s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0916 23:58:22.356710  145530 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0916 23:58:22.356759  145530 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-174003
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-174003: exit status 85 (60.598779ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-150893 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-150893 │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │ 16 Sep 25 23:58 UTC │
	│ delete  │ -p download-only-150893                                                                                                                                                                             │ download-only-150893 │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │ 16 Sep 25 23:58 UTC │
	│ start   │ -o=json --download-only -p download-only-174003 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-174003 │ jenkins │ v1.37.0 │ 16 Sep 25 23:58 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/16 23:58:16
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 23:58:16.079473  145722 out.go:360] Setting OutFile to fd 1 ...
	I0916 23:58:16.079725  145722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:58:16.079745  145722 out.go:374] Setting ErrFile to fd 2...
	I0916 23:58:16.079750  145722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0916 23:58:16.079977  145722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
	I0916 23:58:16.080463  145722 out.go:368] Setting JSON to true
	I0916 23:58:16.081330  145722 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9640,"bootTime":1758057456,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 23:58:16.081417  145722 start.go:140] virtualization: kvm guest
	I0916 23:58:16.083423  145722 out.go:99] [download-only-174003] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0916 23:58:16.083556  145722 notify.go:220] Checking for updates...
	I0916 23:58:16.084761  145722 out.go:171] MINIKUBE_LOCATION=21550
	I0916 23:58:16.086306  145722 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 23:58:16.087707  145722 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	I0916 23:58:16.089104  145722 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	I0916 23:58:16.090387  145722 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 23:58:16.092341  145722 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 23:58:16.092586  145722 driver.go:421] Setting default libvirt URI to qemu:///system
	I0916 23:58:16.123173  145722 out.go:99] Using the kvm2 driver based on user configuration
	I0916 23:58:16.123217  145722 start.go:304] selected driver: kvm2
	I0916 23:58:16.123227  145722 start.go:918] validating driver "kvm2" against <nil>
	I0916 23:58:16.123532  145722 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:58:16.123608  145722 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21550-141589/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 23:58:16.137838  145722 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I0916 23:58:16.137939  145722 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0916 23:58:16.138704  145722 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I0916 23:58:16.138918  145722 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 23:58:16.138955  145722 cni.go:84] Creating CNI manager for ""
	I0916 23:58:16.139021  145722 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 23:58:16.139034  145722 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 23:58:16.139126  145722 start.go:348] cluster config:
	{Name:download-only-174003 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-174003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 23:58:16.139279  145722 iso.go:125] acquiring lock: {Name:mkbc497934aeda3bf1eaa3e96176da91d2f10b30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 23:58:16.141508  145722 out.go:99] Starting "download-only-174003" primary control-plane node in "download-only-174003" cluster
	I0916 23:58:16.141532  145722 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:58:16.170237  145722 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0916 23:58:16.170276  145722 cache.go:58] Caching tarball of preloaded images
	I0916 23:58:16.170446  145722 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:58:16.172376  145722 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0916 23:58:16.172396  145722 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0916 23:58:16.203706  145722 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2ff28357f4fb6607eaee8f503f8804cd -> /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0916 23:58:18.402130  145722 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0916 23:58:18.402230  145722 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21550-141589/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0916 23:58:19.177630  145722 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0916 23:58:19.178000  145722 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/download-only-174003/config.json ...
	I0916 23:58:19.178054  145722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/download-only-174003/config.json: {Name:mk121eeb8907544fe8693caad105d465d37c89c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 23:58:19.178233  145722 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0916 23:58:19.178387  145722 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21550-141589/.minikube/cache/linux/amd64/v1.34.0/kubectl
	
	
	* The control-plane node download-only-174003 host does not exist
	  To start a cluster, run: "minikube start -p download-only-174003"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-174003
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
I0916 23:58:22.966315  145530 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-630200 --alsologtostderr --binary-mirror http://127.0.0.1:36323 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-630200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-630200
--- PASS: TestBinaryMirror (0.64s)
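The --binary-mirror flag exercised above points minikube's Kubernetes binary downloads at a local HTTP endpoint instead of dl.k8s.io. A minimal sketch of standing up such a mirror by hand (the python3 http.server, the ./mirror directory, and the binary-mirror-demo profile name are assumptions; the test wires up its own server on port 36323):

	# the mirror directory is assumed to follow the dl.k8s.io path layout shown in the log above,
	# e.g. ./mirror/release/v1.34.0/bin/linux/amd64/kubectl
	python3 -m http.server 36323 --directory ./mirror &
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:36323 --driver=kvm2 --container-runtime=crio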

                                                
                                    
TestOffline (111.28s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-551588 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-551588 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m49.865584796s)
helpers_test.go:175: Cleaning up "offline-crio-551588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-551588
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-551588: (1.411384272s)
--- PASS: TestOffline (111.28s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-772113
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-772113: exit status 85 (53.059927ms)

                                                
                                                
-- stdout --
	* Profile "addons-772113" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-772113"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-772113
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-772113: exit status 85 (52.387987ms)

                                                
                                                
-- stdout --
	* Profile "addons-772113" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-772113"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (208.98s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-772113 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-772113 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m28.983786674s)
--- PASS: TestAddons/Setup (208.98s)
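Individual addons from this run can also be toggled after the cluster is up; a minimal sketch against the profile used in this report, with addon names taken from the --addons flags above:

	out/minikube-linux-amd64 -p addons-772113 addons enable metrics-server
	out/minikube-linux-amd64 -p addons-772113 addons enable ingress
	out/minikube-linux-amd64 -p addons-772113 addons list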

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-772113 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-772113 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.65s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-772113 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-772113 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [eac57424-ccea-45eb-a612-1e6f0b0fc281] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [eac57424-ccea-45eb-a612-1e6f0b0fc281] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.00512093s
addons_test.go:694: (dbg) Run:  kubectl --context addons-772113 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-772113 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-772113 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.65s)

                                                
                                    
TestAddons/parallel/Registry (15.45s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 11.660725ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-gpg82" [fa17d4ca-3961-45bd-80b1-36bb60e50186] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.097836486s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-69jw9" [9c7b1cd3-d6e3-4846-9991-541d66666aff] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004752525s
addons_test.go:392: (dbg) Run:  kubectl --context addons-772113 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-772113 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-772113 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.286381534s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 ip
2025/09/17 00:02:26 [DEBUG] GET http://192.168.50.205:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.45s)
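The registry the test exercises also answers on the node IP and port shown in the GET line above; a minimal sketch of querying it from the host (the /v2/_catalog endpoint is the standard Docker registry v2 API and is an assumption about this addon's deployment):

	curl http://192.168.50.205:5000/v2/_catalog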

                                                
                                    
TestAddons/parallel/RegistryCreds (0.86s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.368458ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-772113
addons_test.go:332: (dbg) Run:  kubectl --context addons-772113 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.86s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.32s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-2zjwp" [58263e6e-d425-489e-ad7b-499cdfd090f5] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.006471622s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.32s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.96s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 7.751431ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-9q4s4" [a22821ae-c2fa-4dc6-8854-949d14a6c5bd] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005277171s
addons_test.go:463: (dbg) Run:  kubectl --context addons-772113 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.96s)

                                                
                                    
TestAddons/parallel/CSI (51.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0917 00:02:27.859770  145530 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0917 00:02:27.868333  145530 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0917 00:02:27.868377  145530 kapi.go:107] duration metric: took 8.634079ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.651887ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-772113 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-772113 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-772113 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-772113 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-772113 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [b6dbc110-98cb-4edc-b8a5-d238fd0067ca] Pending
helpers_test.go:352: "task-pv-pod" [b6dbc110-98cb-4edc-b8a5-d238fd0067ca] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [b6dbc110-98cb-4edc-b8a5-d238fd0067ca] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 24.004953059s
addons_test.go:572: (dbg) Run:  kubectl --context addons-772113 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-772113 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-772113 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-772113 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-772113 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-772113 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-772113 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-772113 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-772113 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-772113 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-772113 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-772113 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-772113 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [1bb2afa8-a9e1-4a46-859d-71aab33cf09a] Pending
helpers_test.go:352: "task-pv-pod-restore" [1bb2afa8-a9e1-4a46-859d-71aab33cf09a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [1bb2afa8-a9e1-4a46-859d-71aab33cf09a] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00528336s
addons_test.go:614: (dbg) Run:  kubectl --context addons-772113 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-772113 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-772113 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-772113 addons disable volumesnapshots --alsologtostderr -v=1: (1.051026666s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-772113 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.216622723s)
--- PASS: TestAddons/parallel/CSI (51.24s)
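Note: the CSI sequence above reduces to four steps: create a PVC against the hostpath CSI driver, mount it in a pod, snapshot it, and restore the snapshot into a new PVC. A minimal sketch of the claim-and-snapshot part follows; the storage class name (csi-hostpath-sc), snapshot class name (csi-hostpath-snapclass) and resource names are illustrative assumptions, not the contents of the testdata manifests.

kubectl --context addons-772113 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-demo
spec:
  accessModes: ["ReadWriteOnce"]
  resources: { requests: { storage: 1Gi } }
  storageClassName: csi-hostpath-sc            # assumed class created by the addon
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: hpvc-demo-snap
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed snapshot class
  source:
    persistentVolumeClaimName: hpvc-demo
EOF
kubectl --context addons-772113 get volumesnapshot hpvc-demo-snap -o jsonpath='{.status.readyToUse}'

The restore step is then a second PVC whose spec.dataSource names the VolumeSnapshot, which is presumably what pvc-restore.yaml and pv-pod-restore.yaml exercise above.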

                                                
                                    
TestAddons/parallel/Headlamp (28.47s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-772113 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-772113 --alsologtostderr -v=1: (1.405113742s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-5plm5" [dfc7daa7-0a00-4f54-a9be-f504a8ffd1d4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-5plm5" [dfc7daa7-0a00-4f54-a9be-f504a8ffd1d4] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 21.004539087s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-772113 addons disable headlamp --alsologtostderr -v=1: (6.058285861s)
--- PASS: TestAddons/parallel/Headlamp (28.47s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.93s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-2xng2" [a13d0550-0b52-4cbf-8c50-084efc4d0048] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.09797219s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.93s)

                                                
                                    
TestAddons/parallel/LocalPath (54.62s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-772113 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-772113 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-772113 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-772113 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-772113 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-772113 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-772113 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-772113 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [5b58fd72-98eb-4930-b4a0-4c969da0cb56] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [5b58fd72-98eb-4930-b4a0-4c969da0cb56] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [5b58fd72-98eb-4930-b4a0-4c969da0cb56] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003936582s
addons_test.go:967: (dbg) Run:  kubectl --context addons-772113 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 ssh "cat /opt/local-path-provisioner/pvc-0f753cd3-a1fd-4a21-92c7-ac96b7a52aac_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-772113 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-772113 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-772113 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.589534385s)
--- PASS: TestAddons/parallel/LocalPath (54.62s)
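Note: the local-path flow is the same PVC-plus-writer-pod pattern, but backed by the Rancher local-path provisioner, so the data lands in a host directory that the test then reads back over ssh. A rough sketch, with the storage class name (local-path) and PVC name assumed for illustration:

kubectl --context addons-772113 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-demo
spec:
  accessModes: ["ReadWriteOnce"]
  resources: { requests: { storage: 128Mi } }
  storageClassName: local-path                 # assumed provisioner class name
EOF
# once a pod has written into the volume, the file is visible on the node:
minikube -p addons-772113 ssh "ls /opt/local-path-provisioner/"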

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.87s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-gn4ld" [f1ded348-a976-4f31-bdc9-c829d0ef1245] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.017891343s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.87s)

                                                
                                    
TestAddons/parallel/Yakd (12.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-96fv8" [aecaf7af-0149-485b-b31c-ca83eb69dd81] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006918149s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-772113 addons disable yakd --alsologtostderr -v=1: (6.63574095s)
--- PASS: TestAddons/parallel/Yakd (12.64s)

                                                
                                    
TestAddons/StoppedEnableDisable (87.08s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-772113
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-772113: (1m26.775615644s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-772113
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-772113
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-772113
--- PASS: TestAddons/StoppedEnableDisable (87.08s)

                                                
                                    
TestCertOptions (52.04s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-326277 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-326277 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (49.489923499s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-326277 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-326277 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-326277 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-326277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-326277
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-326277: (2.02472008s)
--- PASS: TestCertOptions (52.04s)
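Note: TestCertOptions starts a cluster with extra apiserver SANs and a non-default port, then checks that they show up in the generated apiserver certificate and in the kubeconfig. A hedged way to verify the same thing by hand, reusing the flags from the run above (the profile name and grep pattern are only illustrative):

minikube start -p cert-demo --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com \
  --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
minikube -p cert-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -E '192\.168\.15\.15|www\.google\.com'
kubectl --context cert-demo config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # should end in :8555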

                                                
                                    
TestCertExpiration (315.65s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-867223 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-867223 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (46.796685138s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-867223 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-867223 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m27.951266341s)
helpers_test.go:175: Cleaning up "cert-expiration-867223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-867223
--- PASS: TestCertExpiration (315.65s)
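Note: TestCertExpiration first provisions certificates valid for only three minutes (--cert-expiration=3m), then restarts the same profile with --cert-expiration=8760h so minikube has to regenerate them. A quick way to inspect the resulting validity window (a sketch; the profile name is taken from the run above):

minikube -p cert-expiration-867223 ssh "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"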

                                                
                                    
TestForceSystemdFlag (72.71s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-487816 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-487816 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m11.536655793s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-487816 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-487816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-487816
--- PASS: TestForceSystemdFlag (72.71s)

                                                
                                    
TestForceSystemdEnv (45.6s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-601776 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-601776 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (44.016622118s)
helpers_test.go:175: Cleaning up "force-systemd-env-601776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-601776
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-601776: (1.578192496s)
--- PASS: TestForceSystemdEnv (45.60s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.22s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0917 01:08:41.170650  145530 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0917 01:08:41.170848  145530 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0917 01:08:41.205834  145530 install.go:62] docker-machine-driver-kvm2: exit status 1
W0917 01:08:41.206094  145530 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0917 01:08:41.206175  145530 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3035183839/001/docker-machine-driver-kvm2
I0917 01:08:41.491443  145530 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3035183839/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80] Decompressors:map[bz2:0xc00028e3b0 gz:0xc00028e3b8 tar:0xc00028e360 tar.bz2:0xc00028e370 tar.gz:0xc00028e380 tar.xz:0xc00028e390 tar.zst:0xc00028e3a0 tbz2:0xc00028e370 tgz:0xc00028e380 txz:0xc00028e390 tzst:0xc00028e3a0 xz:0xc00028e3c0 zip:0xc00028e3d0 zst:0xc00028e3c8] Getters:map[file:0xc0013dafc0 http:0xc000668640 https:0xc000668690] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0917 01:08:41.491521  145530 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3035183839/001/docker-machine-driver-kvm2
I0917 01:08:43.025620  145530 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0917 01:08:43.025768  145530 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0917 01:08:43.060537  145530 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0917 01:08:43.060574  145530 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0917 01:08:43.060662  145530 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0917 01:08:43.060714  145530 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3035183839/002/docker-machine-driver-kvm2
I0917 01:08:43.228180  145530 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3035183839/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80 0x5b71c80] Decompressors:map[bz2:0xc00028e3b0 gz:0xc00028e3b8 tar:0xc00028e360 tar.bz2:0xc00028e370 tar.gz:0xc00028e380 tar.xz:0xc00028e390 tar.zst:0xc00028e3a0 tbz2:0xc00028e370 tgz:0xc00028e380 txz:0xc00028e390 tzst:0xc00028e3a0 xz:0xc00028e3c0 zip:0xc00028e3d0 zst:0xc00028e3c8] Getters:map[file:0xc000b051c0 http:0xc000670f50 https:0xc000670fa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0917 01:08:43.228258  145530 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3035183839/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.22s)
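Note: the log above shows the install-or-update logic: minikube validates any docker-machine-driver-kvm2 found on PATH, and if the binary is missing or older than the wanted release it downloads a fresh one, trying the arch-specific asset first and falling back to the plain asset name when the checksum file 404s. A manual approximation of the same check and fallback (the curl fallback is an assumption based on the URLs in the log, not minikube's own code path):

docker-machine-driver-kvm2 version    # minikube compares this reported version against the release it wants
curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64 \
  || curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2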

                                                
                                    
TestErrorSpam/setup (40.4s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-676221 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-676221 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0917 00:06:53.406224  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:06:53.412821  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:06:53.424437  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:06:53.445957  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:06:53.487540  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:06:53.569015  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:06:53.730624  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:06:54.052410  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:06:54.694627  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:06:55.976284  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:06:58.539056  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:07:03.660839  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:07:13.902958  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-676221 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-676221 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.402536193s)
--- PASS: TestErrorSpam/setup (40.40s)

                                                
                                    
TestErrorSpam/start (0.39s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

                                                
                                    
TestErrorSpam/status (0.88s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 status
--- PASS: TestErrorSpam/status (0.88s)

                                                
                                    
TestErrorSpam/pause (1.96s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 pause
E0917 00:07:34.384437  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 pause
--- PASS: TestErrorSpam/pause (1.96s)

                                                
                                    
TestErrorSpam/unpause (2.46s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 unpause
--- PASS: TestErrorSpam/unpause (2.46s)

                                                
                                    
TestErrorSpam/stop (5.47s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 stop: (2.513023525s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 stop: (1.518774519s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-676221 --log_dir /tmp/nospam-676221 stop: (1.435761138s)
--- PASS: TestErrorSpam/stop (5.47s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21550-141589/.minikube/files/etc/test/nested/copy/145530/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (83.46s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-456067 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0917 00:08:15.346941  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-456067 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m23.463191635s)
--- PASS: TestFunctional/serial/StartWithProxy (83.46s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (54.61s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0917 00:09:07.201162  145530 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-456067 --alsologtostderr -v=8
E0917 00:09:37.271286  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-456067 --alsologtostderr -v=8: (54.607263925s)
functional_test.go:678: soft start took 54.608164405s for "functional-456067" cluster.
I0917 00:10:01.808906  145530 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (54.61s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-456067 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.39s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-456067 cache add registry.k8s.io/pause:3.1: (1.233577105s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-456067 cache add registry.k8s.io/pause:3.3: (1.958177505s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-456067 cache add registry.k8s.io/pause:latest: (1.194128457s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.39s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-456067 /tmp/TestFunctionalserialCacheCmdcacheadd_local445324290/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 cache add minikube-local-cache-test:functional-456067
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-456067 cache add minikube-local-cache-test:functional-456067: (1.201417605s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 cache delete minikube-local-cache-test:functional-456067
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-456067
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.56s)
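Note: add_local builds a throwaway image on the host with docker, loads it into the node through the minikube cache, and then removes it from both sides. In outline (the image tag mirrors the one above; the Dockerfile contents are not shown in this log and are left to the reader):

docker build -t minikube-local-cache-test:functional-456067 .   # any trivial Dockerfile will do
minikube -p functional-456067 cache add minikube-local-cache-test:functional-456067
minikube -p functional-456067 cache delete minikube-local-cache-test:functional-456067
docker rmi minikube-local-cache-test:functional-456067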

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-456067 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (217.708343ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-456067 cache reload: (1.0415763s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)
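Note: cache_reload demonstrates that an image added via the cache can be restored inside the node after it has been deleted from the container runtime: crictl rmi removes it, crictl inspecti then fails, and minikube cache reload pushes it back. Condensed, using the same image as above:

minikube -p functional-456067 ssh "sudo crictl rmi registry.k8s.io/pause:latest"
minikube -p functional-456067 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # fails: image is gone
minikube -p functional-456067 cache reload
minikube -p functional-456067 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # succeeds again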

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 kubectl -- --context functional-456067 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-456067 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.59s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-456067 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-456067 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.593331936s)
functional_test.go:776: restart took 31.593474584s for "functional-456067" cluster.
I0917 00:10:41.879775  145530 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (31.59s)
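Note: --extra-config injects per-component flags on restart; here the apiserver is given the NamespaceAutoProvision admission plugin. One hedged way to confirm the flag actually reached the running apiserver (the label selector assumes the usual kubeadm static-pod labels):

minikube start -p functional-456067 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
kubectl --context functional-456067 -n kube-system get pod -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins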

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-456067 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.57s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-456067 logs: (1.56463947s)
--- PASS: TestFunctional/serial/LogsCmd (1.57s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.57s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 logs --file /tmp/TestFunctionalserialLogsFileCmd2110626006/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-456067 logs --file /tmp/TestFunctionalserialLogsFileCmd2110626006/001/logs.txt: (1.56743659s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.57s)

                                                
                                    
TestFunctional/serial/InvalidService (4.56s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-456067 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-456067
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-456067: exit status 115 (305.057433ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.50.44:30184 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-456067 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-456067 delete -f testdata/invalidsvc.yaml: (1.047383346s)
--- PASS: TestFunctional/serial/InvalidService (4.56s)
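Note: InvalidService points `minikube service` at a Service with no running pod behind it, which is why the command exits with status 115 and the SVC_UNREACHABLE message above. A hypothetical manifest that would reproduce the situation (the actual testdata/invalidsvc.yaml is not shown in this log):

kubectl --context functional-456067 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist       # assumed: no pod carries this label
  ports:
  - port: 80
EOF
minikube -p functional-456067 service invalid-svc   # exits non-zero: no running pod backs the service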

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-456067 config get cpus: exit status 14 (54.575056ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-456067 config get cpus: exit status 14 (51.715207ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                    
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-456067 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-456067 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (135.709824ms)

                                                
                                                
-- stdout --
	* [functional-456067] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:12:13.978922  154439 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:12:13.979036  154439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:13.979045  154439 out.go:374] Setting ErrFile to fd 2...
	I0917 00:12:13.979049  154439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:12:13.979300  154439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
	I0917 00:12:13.979732  154439 out.go:368] Setting JSON to false
	I0917 00:12:13.980670  154439 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10478,"bootTime":1758057456,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:12:13.980773  154439 start.go:140] virtualization: kvm guest
	I0917 00:12:13.983008  154439 out.go:179] * [functional-456067] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 00:12:13.984535  154439 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:12:13.984515  154439 notify.go:220] Checking for updates...
	I0917 00:12:13.987311  154439 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:12:13.988734  154439 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	I0917 00:12:13.990096  154439 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	I0917 00:12:13.991543  154439 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:12:13.992887  154439 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:12:13.994498  154439 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:12:13.994880  154439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:12:13.994973  154439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:12:14.009258  154439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37183
	I0917 00:12:14.009885  154439 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:12:14.010526  154439 main.go:141] libmachine: Using API Version  1
	I0917 00:12:14.010563  154439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:12:14.011087  154439 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:12:14.011358  154439 main.go:141] libmachine: (functional-456067) Calling .DriverName
	I0917 00:12:14.011660  154439 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:12:14.012037  154439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:12:14.012092  154439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:12:14.027328  154439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43039
	I0917 00:12:14.027798  154439 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:12:14.028307  154439 main.go:141] libmachine: Using API Version  1
	I0917 00:12:14.028332  154439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:12:14.028690  154439 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:12:14.028878  154439 main.go:141] libmachine: (functional-456067) Calling .DriverName
	I0917 00:12:14.060183  154439 out.go:179] * Using the kvm2 driver based on existing profile
	I0917 00:12:14.061783  154439 start.go:304] selected driver: kvm2
	I0917 00:12:14.061809  154439 start.go:918] validating driver "kvm2" against &{Name:functional-456067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-456067 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:12:14.062048  154439 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:12:14.064344  154439 out.go:203] 
	W0917 00:12:14.065432  154439 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 00:12:14.066496  154439 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-456067 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.27s)
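
Both dry-run checks in this test exercise the same pre-flight validation: a 250MB request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY because it falls below the 1800MB floor quoted in the log. A minimal standalone Go sketch of that kind of check follows; the constant and helper are illustrative only, not minikube's actual implementation.

    package main

    import "fmt"

    // minUsableMemMB mirrors the 1800MB floor quoted in the log above; this
    // helper is an illustrative sketch, not minikube's real validation code.
    const minUsableMemMB = 1800

    func validateMemory(requestedMB int) error {
        if requestedMB < minUsableMemMB {
            return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
                requestedMB, minUsableMemMB)
        }
        return nil
    }

    func main() {
        if err := validateMemory(250); err != nil {
            // In the log this is where the dry run stops with exit status 23.
            fmt.Println("X Exiting due to", err)
        }
    }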

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-456067 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-456067 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (135.94526ms)

                                                
                                                
-- stdout --
	* [functional-456067] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:11:05.768596  153347 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:11:05.768707  153347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:11:05.768713  153347 out.go:374] Setting ErrFile to fd 2...
	I0917 00:11:05.768718  153347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:11:05.769062  153347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
	I0917 00:11:05.769555  153347 out.go:368] Setting JSON to false
	I0917 00:11:05.770441  153347 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10410,"bootTime":1758057456,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 00:11:05.770535  153347 start.go:140] virtualization: kvm guest
	I0917 00:11:05.772693  153347 out.go:179] * [functional-456067] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0917 00:11:05.774172  153347 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:11:05.774158  153347 notify.go:220] Checking for updates...
	I0917 00:11:05.775620  153347 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:11:05.777161  153347 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	I0917 00:11:05.778402  153347 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	I0917 00:11:05.779606  153347 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 00:11:05.781057  153347 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:11:05.782893  153347 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:11:05.783339  153347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:11:05.783426  153347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:11:05.796876  153347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42031
	I0917 00:11:05.797372  153347 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:11:05.797998  153347 main.go:141] libmachine: Using API Version  1
	I0917 00:11:05.798051  153347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:11:05.798478  153347 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:11:05.798699  153347 main.go:141] libmachine: (functional-456067) Calling .DriverName
	I0917 00:11:05.799041  153347 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:11:05.799358  153347 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:11:05.799400  153347 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:11:05.812973  153347 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34555
	I0917 00:11:05.813425  153347 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:11:05.813905  153347 main.go:141] libmachine: Using API Version  1
	I0917 00:11:05.813929  153347 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:11:05.814299  153347 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:11:05.814481  153347 main.go:141] libmachine: (functional-456067) Calling .DriverName
	I0917 00:11:05.845677  153347 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I0917 00:11:05.847151  153347 start.go:304] selected driver: kvm2
	I0917 00:11:05.847184  153347 start.go:918] validating driver "kvm2" against &{Name:functional-456067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-456067 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.44 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:11:05.847304  153347 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:11:05.849784  153347 out.go:203] 
	W0917 00:11:05.851199  153347 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0917 00:11:05.852413  153347 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.78s)
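
The -f flag passed to `minikube status` above is a Go text/template rendered against the status object, which is why the fields are referenced as {{.Host}}, {{.Kubelet}}, and so on. A minimal sketch of how such a format string is evaluated, using a hypothetical Status struct that only carries the fields the test requests:

    package main

    import (
        "os"
        "text/template"
    )

    // Status is a hypothetical stand-in for the object minikube renders; it
    // only carries the fields referenced by the format string in the test.
    type Status struct {
        Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
        // Format string copied from the test invocation above (including its
        // "kublet" spelling); referencing a field the struct lacks would make
        // Execute return an error.
        format := "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"

        tmpl := template.Must(template.New("status").Parse(format))
        st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
        if err := tmpl.Execute(os.Stdout, st); err != nil {
            panic(err)
        }
    }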

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (19.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-456067 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-456067 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-b8f7n" [75846762-7135-48fa-b2aa-8d1927545a18] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-b8f7n" [75846762-7135-48fa-b2aa-8d1927545a18] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 19.003299013s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.50.44:32321
functional_test.go:1680: http://192.168.50.44:32321: success! body:
Request served by hello-node-connect-7d85dfc575-b8f7n

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.50.44:32321
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (19.55s)
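
After `minikube service hello-node-connect --url` resolves the NodePort endpoint, the test simply issues a GET and checks that the echo-server reports which pod served the request. A small Go sketch of that final verification step, assuming the URL printed in the log above:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    func main() {
        // NodePort URL taken from the log above; the real test obtains it from
        // `minikube service hello-node-connect --url`.
        url := "http://192.168.50.44:32321"

        client := &http.Client{Timeout: 10 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        // The echo-server answers with "Request served by <pod-name>" followed
        // by the request it received, as shown in the success message above.
        if strings.HasPrefix(string(body), "Request served by") {
            fmt.Println("success! body:")
            fmt.Print(string(body))
        }
    }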

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh -n functional-456067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 cp functional-456067:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2626508368/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh -n functional-456067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh -n functional-456067 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.42s)

                                                
                                    
TestFunctional/parallel/MySQL (23.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-456067 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-fk8qm" [d5aab6dc-8703-4af7-bd0b-093f75de9f53] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-fk8qm" [d5aab6dc-8703-4af7-bd0b-093f75de9f53] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.075797822s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-456067 exec mysql-5bb876957f-fk8qm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-456067 exec mysql-5bb876957f-fk8qm -- mysql -ppassword -e "show databases;": exit status 1 (170.386503ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0917 00:11:09.576946  145530 retry.go:31] will retry after 571.150182ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-456067 exec mysql-5bb876957f-fk8qm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-456067 exec mysql-5bb876957f-fk8qm -- mysql -ppassword -e "show databases;": exit status 1 (154.917524ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0917 00:11:10.304091  145530 retry.go:31] will retry after 1.620842303s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-456067 exec mysql-5bb876957f-fk8qm -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-456067 exec mysql-5bb876957f-fk8qm -- mysql -ppassword -e "show databases;": exit status 1 (131.892916ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0917 00:11:12.057510  145530 retry.go:31] will retry after 1.776809204s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-456067 exec mysql-5bb876957f-fk8qm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.83s)
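
The retry.go lines above show the usual pattern for this test: the mysql pod is Running before the server inside it accepts connections, so the first `kubectl exec ... mysql` attempts fail (access denied, then a socket error) and are retried with growing delays. A rough Go sketch of that retry loop using only the standard library; the command and delays are illustrative, not the test's exact values:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Same query the test runs; pod name and context copied from the log above.
        args := []string{"--context", "functional-456067", "exec", "mysql-5bb876957f-fk8qm",
            "--", "mysql", "-ppassword", "-e", "show databases;"}

        delay := 500 * time.Millisecond
        for attempt := 1; attempt <= 6; attempt++ {
            out, err := exec.Command("kubectl", args...).CombinedOutput()
            if err == nil {
                fmt.Print(string(out)) // server is up, query succeeded
                return
            }
            fmt.Printf("attempt %d failed (%v), will retry after %s\n", attempt, err, delay)
            time.Sleep(delay)
            delay *= 2 // simple doubling backoff; the real test uses jittered delays
        }
        fmt.Println("mysql never became ready")
    }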

                                                
                                    
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/145530/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "sudo cat /etc/test/nested/copy/145530/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

                                                
                                    
TestFunctional/parallel/CertSync (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/145530.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "sudo cat /etc/ssl/certs/145530.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/145530.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "sudo cat /usr/share/ca-certificates/145530.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1455302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "sudo cat /etc/ssl/certs/1455302.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1455302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "sudo cat /usr/share/ca-certificates/1455302.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.41s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-456067 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-456067 ssh "sudo systemctl is-active docker": exit status 1 (230.174902ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-456067 ssh "sudo systemctl is-active containerd": exit status 1 (227.554438ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
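
Because crio is the active runtime in this profile, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit non-zero (the status 3 surfaced through `minikube ssh` above), which is exactly what the test expects. A short Go sketch of running the same check locally and reading the exit code; the unit name is just an example:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("systemctl", "is-active", "docker").CombinedOutput()
        state := strings.TrimSpace(string(out))

        if exitErr, ok := err.(*exec.ExitError); ok {
            // systemctl still prints the state ("inactive", "failed", ...) on
            // stdout but signals it with a non-zero exit code, typically 3 for
            // inactive units.
            fmt.Printf("unit state: %s (exit code %d)\n", state, exitErr.ExitCode())
            return
        }
        if err != nil {
            fmt.Println("could not run systemctl:", err)
            return
        }
        fmt.Println("unit state:", state) // active units exit 0
    }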

                                                
                                    
TestFunctional/parallel/License (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.45s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-456067 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-456067
localhost/kicbase/echo-server:functional-456067
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-456067 image ls --format short --alsologtostderr:
I0917 00:12:15.063150  154602 out.go:360] Setting OutFile to fd 1 ...
I0917 00:12:15.063280  154602 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:12:15.063290  154602 out.go:374] Setting ErrFile to fd 2...
I0917 00:12:15.063293  154602 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:12:15.063489  154602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
I0917 00:12:15.064110  154602 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:12:15.064205  154602 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:12:15.064569  154602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 00:12:15.064625  154602 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 00:12:15.078798  154602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36829
I0917 00:12:15.079349  154602 main.go:141] libmachine: () Calling .GetVersion
I0917 00:12:15.079932  154602 main.go:141] libmachine: Using API Version  1
I0917 00:12:15.079965  154602 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 00:12:15.080356  154602 main.go:141] libmachine: () Calling .GetMachineName
I0917 00:12:15.080570  154602 main.go:141] libmachine: (functional-456067) Calling .GetState
I0917 00:12:15.082956  154602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 00:12:15.083002  154602 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 00:12:15.097495  154602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
I0917 00:12:15.097970  154602 main.go:141] libmachine: () Calling .GetVersion
I0917 00:12:15.098454  154602 main.go:141] libmachine: Using API Version  1
I0917 00:12:15.098476  154602 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 00:12:15.098909  154602 main.go:141] libmachine: () Calling .GetMachineName
I0917 00:12:15.099122  154602 main.go:141] libmachine: (functional-456067) Calling .DriverName
I0917 00:12:15.099368  154602 ssh_runner.go:195] Run: systemctl --version
I0917 00:12:15.099394  154602 main.go:141] libmachine: (functional-456067) Calling .GetSSHHostname
I0917 00:12:15.102396  154602 main.go:141] libmachine: (functional-456067) DBG | domain functional-456067 has defined MAC address 52:54:00:03:de:c7 in network mk-functional-456067
I0917 00:12:15.102814  154602 main.go:141] libmachine: (functional-456067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:de:c7", ip: ""} in network mk-functional-456067: {Iface:virbr2 ExpiryTime:2025-09-17 01:08:00 +0000 UTC Type:0 Mac:52:54:00:03:de:c7 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:functional-456067 Clientid:01:52:54:00:03:de:c7}
I0917 00:12:15.102849  154602 main.go:141] libmachine: (functional-456067) DBG | domain functional-456067 has defined IP address 192.168.50.44 and MAC address 52:54:00:03:de:c7 in network mk-functional-456067
I0917 00:12:15.102963  154602 main.go:141] libmachine: (functional-456067) Calling .GetSSHPort
I0917 00:12:15.103144  154602 main.go:141] libmachine: (functional-456067) Calling .GetSSHKeyPath
I0917 00:12:15.103300  154602 main.go:141] libmachine: (functional-456067) Calling .GetSSHUsername
I0917 00:12:15.103443  154602 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/functional-456067/id_rsa Username:docker}
I0917 00:12:15.181573  154602 ssh_runner.go:195] Run: sudo crictl images --output json
I0917 00:12:15.223734  154602 main.go:141] libmachine: Making call to close driver server
I0917 00:12:15.223753  154602 main.go:141] libmachine: (functional-456067) Calling .Close
I0917 00:12:15.224120  154602 main.go:141] libmachine: Successfully made call to close driver server
I0917 00:12:15.224143  154602 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 00:12:15.224162  154602 main.go:141] libmachine: Making call to close driver server
I0917 00:12:15.224164  154602 main.go:141] libmachine: (functional-456067) DBG | Closing plugin on server side
I0917 00:12:15.224172  154602 main.go:141] libmachine: (functional-456067) Calling .Close
I0917 00:12:15.224474  154602 main.go:141] libmachine: (functional-456067) DBG | Closing plugin on server side
I0917 00:12:15.224493  154602 main.go:141] libmachine: Successfully made call to close driver server
I0917 00:12:15.224543  154602 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-456067 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/my-image                      │ functional-456067  │ 99d57ab3ebc35 │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-456067  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/minikube-local-cache-test     │ functional-456067  │ 27e1dbfde6a43 │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-456067 image ls --format table --alsologtostderr:
I0917 00:12:18.155025  154768 out.go:360] Setting OutFile to fd 1 ...
I0917 00:12:18.155308  154768 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:12:18.155317  154768 out.go:374] Setting ErrFile to fd 2...
I0917 00:12:18.155324  154768 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:12:18.155521  154768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
I0917 00:12:18.156169  154768 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:12:18.156293  154768 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:12:18.156695  154768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 00:12:18.156770  154768 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 00:12:18.170918  154768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35259
I0917 00:12:18.171603  154768 main.go:141] libmachine: () Calling .GetVersion
I0917 00:12:18.172219  154768 main.go:141] libmachine: Using API Version  1
I0917 00:12:18.172249  154768 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 00:12:18.172768  154768 main.go:141] libmachine: () Calling .GetMachineName
I0917 00:12:18.173098  154768 main.go:141] libmachine: (functional-456067) Calling .GetState
I0917 00:12:18.175595  154768 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 00:12:18.175650  154768 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 00:12:18.189606  154768 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39205
I0917 00:12:18.190181  154768 main.go:141] libmachine: () Calling .GetVersion
I0917 00:12:18.190797  154768 main.go:141] libmachine: Using API Version  1
I0917 00:12:18.190828  154768 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 00:12:18.191231  154768 main.go:141] libmachine: () Calling .GetMachineName
I0917 00:12:18.191456  154768 main.go:141] libmachine: (functional-456067) Calling .DriverName
I0917 00:12:18.191704  154768 ssh_runner.go:195] Run: systemctl --version
I0917 00:12:18.191745  154768 main.go:141] libmachine: (functional-456067) Calling .GetSSHHostname
I0917 00:12:18.195281  154768 main.go:141] libmachine: (functional-456067) DBG | domain functional-456067 has defined MAC address 52:54:00:03:de:c7 in network mk-functional-456067
I0917 00:12:18.195707  154768 main.go:141] libmachine: (functional-456067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:de:c7", ip: ""} in network mk-functional-456067: {Iface:virbr2 ExpiryTime:2025-09-17 01:08:00 +0000 UTC Type:0 Mac:52:54:00:03:de:c7 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:functional-456067 Clientid:01:52:54:00:03:de:c7}
I0917 00:12:18.195741  154768 main.go:141] libmachine: (functional-456067) DBG | domain functional-456067 has defined IP address 192.168.50.44 and MAC address 52:54:00:03:de:c7 in network mk-functional-456067
I0917 00:12:18.195921  154768 main.go:141] libmachine: (functional-456067) Calling .GetSSHPort
I0917 00:12:18.196113  154768 main.go:141] libmachine: (functional-456067) Calling .GetSSHKeyPath
I0917 00:12:18.196285  154768 main.go:141] libmachine: (functional-456067) Calling .GetSSHUsername
I0917 00:12:18.196474  154768 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/functional-456067/id_rsa Username:docker}
I0917 00:12:18.273196  154768 ssh_runner.go:195] Run: sudo crictl images --output json
I0917 00:12:18.341198  154768 main.go:141] libmachine: Making call to close driver server
I0917 00:12:18.341218  154768 main.go:141] libmachine: (functional-456067) Calling .Close
I0917 00:12:18.341529  154768 main.go:141] libmachine: Successfully made call to close driver server
I0917 00:12:18.341546  154768 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 00:12:18.341559  154768 main.go:141] libmachine: Making call to close driver server
I0917 00:12:18.341567  154768 main.go:141] libmachine: (functional-456067) Calling .Close
I0917 00:12:18.341820  154768 main.go:141] libmachine: Successfully made call to close driver server
I0917 00:12:18.341848  154768 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 00:12:18.341883  154768 main.go:141] libmachine: (functional-456067) DBG | Closing plugin on server side
E0917 00:12:21.113745  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-456067 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"99d57ab3ebc355a70f265be625963bbc8548a91610d0f0204a0153dcd49fee68","repoDigests":["localhost/my-image@sha256:f25156c00389f48a52e5230320897aa7115aed3ee8652c9cbb82575de580d36c"],"repoTags":["localhost/my-image:functional-456067"],"size":"1468599"},{"id":"a0af72f2ec6d628152b015a
46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86"
,"docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-456067"],"size":"4945246"},{"id":"93cc1d90764a199b56d48505465c17154ae47c146345858df25c53de810e2d83","repoDigests":["docker.io/library/7d40988b1f7af4a508a0c68c7123a26820c5eb870e7e0947911bf7b8ae06d10a-tmp@sha256:9c38501932fe0afff346b1e51f6a973671707290310ddd0ba4e198d8d2b5e471"],"repoTags":[],"size":"1466017"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","g
cr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"27e1dbfde6a431288ffe770a97cdb91119202d6942cc7641ad17457fd76ce7b1","repoDigests":["localhost/minikube-local-cache-test@sha256:6b22c088eec8c05775fe108dede960d132f38cfbd9eb2950af50374147f3065c"],"repoTags":["localhost/minikube-local-cache-test:functional-456067"],"size":"3330"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.i
o/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278
fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/librar
y/mysql:5.7"],"size":"519571821"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-456067 image ls --format json --alsologtostderr:
I0917 00:12:17.935459  154744 out.go:360] Setting OutFile to fd 1 ...
I0917 00:12:17.935578  154744 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:12:17.935583  154744 out.go:374] Setting ErrFile to fd 2...
I0917 00:12:17.935587  154744 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:12:17.935766  154744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
I0917 00:12:17.936351  154744 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:12:17.936452  154744 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:12:17.936809  154744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 00:12:17.936881  154744 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 00:12:17.950449  154744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
I0917 00:12:17.951001  154744 main.go:141] libmachine: () Calling .GetVersion
I0917 00:12:17.951518  154744 main.go:141] libmachine: Using API Version  1
I0917 00:12:17.951534  154744 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 00:12:17.951933  154744 main.go:141] libmachine: () Calling .GetMachineName
I0917 00:12:17.952148  154744 main.go:141] libmachine: (functional-456067) Calling .GetState
I0917 00:12:17.954506  154744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 00:12:17.954562  154744 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 00:12:17.968535  154744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37561
I0917 00:12:17.969098  154744 main.go:141] libmachine: () Calling .GetVersion
I0917 00:12:17.969644  154744 main.go:141] libmachine: Using API Version  1
I0917 00:12:17.969671  154744 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 00:12:17.970080  154744 main.go:141] libmachine: () Calling .GetMachineName
I0917 00:12:17.970305  154744 main.go:141] libmachine: (functional-456067) Calling .DriverName
I0917 00:12:17.970554  154744 ssh_runner.go:195] Run: systemctl --version
I0917 00:12:17.970590  154744 main.go:141] libmachine: (functional-456067) Calling .GetSSHHostname
I0917 00:12:17.974186  154744 main.go:141] libmachine: (functional-456067) DBG | domain functional-456067 has defined MAC address 52:54:00:03:de:c7 in network mk-functional-456067
I0917 00:12:17.974923  154744 main.go:141] libmachine: (functional-456067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:de:c7", ip: ""} in network mk-functional-456067: {Iface:virbr2 ExpiryTime:2025-09-17 01:08:00 +0000 UTC Type:0 Mac:52:54:00:03:de:c7 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:functional-456067 Clientid:01:52:54:00:03:de:c7}
I0917 00:12:17.974951  154744 main.go:141] libmachine: (functional-456067) DBG | domain functional-456067 has defined IP address 192.168.50.44 and MAC address 52:54:00:03:de:c7 in network mk-functional-456067
I0917 00:12:17.975161  154744 main.go:141] libmachine: (functional-456067) Calling .GetSSHPort
I0917 00:12:17.975340  154744 main.go:141] libmachine: (functional-456067) Calling .GetSSHKeyPath
I0917 00:12:17.975493  154744 main.go:141] libmachine: (functional-456067) Calling .GetSSHUsername
I0917 00:12:17.975668  154744 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/functional-456067/id_rsa Username:docker}
I0917 00:12:18.053374  154744 ssh_runner.go:195] Run: sudo crictl images --output json
I0917 00:12:18.100998  154744 main.go:141] libmachine: Making call to close driver server
I0917 00:12:18.101009  154744 main.go:141] libmachine: (functional-456067) Calling .Close
I0917 00:12:18.101350  154744 main.go:141] libmachine: Successfully made call to close driver server
I0917 00:12:18.101372  154744 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 00:12:18.101391  154744 main.go:141] libmachine: Making call to close driver server
I0917 00:12:18.101398  154744 main.go:141] libmachine: (functional-456067) Calling .Close
I0917 00:12:18.101742  154744 main.go:141] libmachine: Successfully made call to close driver server
I0917 00:12:18.101761  154744 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 00:12:18.101796  154744 main.go:141] libmachine: (functional-456067) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
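Note: the listing behind this test is the backend call visible in the trace above (sudo crictl images --output json) and can be reproduced by hand inside the node. A minimal sketch; the jq step is illustrative only, is not part of the test, and assumes the usual crictl JSON shape with an "images" array:

# inside the guest (via minikube ssh) or over ssh from the host
sudo crictl images --output json
# pull out just the tags; jq is an assumption here, not something the test uses
sudo crictl images --output json | jq -r '.images[].repoTags[]'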

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-456067 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-456067
size: "4945246"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 27e1dbfde6a431288ffe770a97cdb91119202d6942cc7641ad17457fd76ce7b1
repoDigests:
- localhost/minikube-local-cache-test@sha256:6b22c088eec8c05775fe108dede960d132f38cfbd9eb2950af50374147f3065c
repoTags:
- localhost/minikube-local-cache-test:functional-456067
size: "3330"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-456067 image ls --format yaml --alsologtostderr:
I0917 00:12:15.275240  154626 out.go:360] Setting OutFile to fd 1 ...
I0917 00:12:15.275514  154626 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:12:15.275525  154626 out.go:374] Setting ErrFile to fd 2...
I0917 00:12:15.275529  154626 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:12:15.275771  154626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
I0917 00:12:15.276380  154626 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:12:15.276475  154626 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:12:15.276893  154626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 00:12:15.276933  154626 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 00:12:15.290805  154626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41409
I0917 00:12:15.291347  154626 main.go:141] libmachine: () Calling .GetVersion
I0917 00:12:15.291909  154626 main.go:141] libmachine: Using API Version  1
I0917 00:12:15.291931  154626 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 00:12:15.292317  154626 main.go:141] libmachine: () Calling .GetMachineName
I0917 00:12:15.292595  154626 main.go:141] libmachine: (functional-456067) Calling .GetState
I0917 00:12:15.295045  154626 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 00:12:15.295088  154626 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 00:12:15.308534  154626 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39161
I0917 00:12:15.308953  154626 main.go:141] libmachine: () Calling .GetVersion
I0917 00:12:15.309436  154626 main.go:141] libmachine: Using API Version  1
I0917 00:12:15.309462  154626 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 00:12:15.309885  154626 main.go:141] libmachine: () Calling .GetMachineName
I0917 00:12:15.310119  154626 main.go:141] libmachine: (functional-456067) Calling .DriverName
I0917 00:12:15.310379  154626 ssh_runner.go:195] Run: systemctl --version
I0917 00:12:15.310406  154626 main.go:141] libmachine: (functional-456067) Calling .GetSSHHostname
I0917 00:12:15.314393  154626 main.go:141] libmachine: (functional-456067) DBG | domain functional-456067 has defined MAC address 52:54:00:03:de:c7 in network mk-functional-456067
I0917 00:12:15.314934  154626 main.go:141] libmachine: (functional-456067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:de:c7", ip: ""} in network mk-functional-456067: {Iface:virbr2 ExpiryTime:2025-09-17 01:08:00 +0000 UTC Type:0 Mac:52:54:00:03:de:c7 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:functional-456067 Clientid:01:52:54:00:03:de:c7}
I0917 00:12:15.314972  154626 main.go:141] libmachine: (functional-456067) DBG | domain functional-456067 has defined IP address 192.168.50.44 and MAC address 52:54:00:03:de:c7 in network mk-functional-456067
I0917 00:12:15.315191  154626 main.go:141] libmachine: (functional-456067) Calling .GetSSHPort
I0917 00:12:15.315422  154626 main.go:141] libmachine: (functional-456067) Calling .GetSSHKeyPath
I0917 00:12:15.315607  154626 main.go:141] libmachine: (functional-456067) Calling .GetSSHUsername
I0917 00:12:15.315790  154626 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/functional-456067/id_rsa Username:docker}
I0917 00:12:15.394267  154626 ssh_runner.go:195] Run: sudo crictl images --output json
I0917 00:12:15.436647  154626 main.go:141] libmachine: Making call to close driver server
I0917 00:12:15.436659  154626 main.go:141] libmachine: (functional-456067) Calling .Close
I0917 00:12:15.437030  154626 main.go:141] libmachine: Successfully made call to close driver server
I0917 00:12:15.437051  154626 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 00:12:15.437056  154626 main.go:141] libmachine: (functional-456067) DBG | Closing plugin on server side
I0917 00:12:15.437059  154626 main.go:141] libmachine: Making call to close driver server
I0917 00:12:15.437106  154626 main.go:141] libmachine: (functional-456067) Calling .Close
I0917 00:12:15.437349  154626 main.go:141] libmachine: Successfully made call to close driver server
I0917 00:12:15.437364  154626 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
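Note: the YAML variant is the same listing at the minikube CLI level; each entry carries id, repoDigests, repoTags and size as printed above. A minimal sketch for re-running it and skimming only the tags (the grep step is illustrative, not part of the test):

out/minikube-linux-amd64 -p functional-456067 image ls --format yaml
out/minikube-linux-amd64 -p functional-456067 image ls --format yaml | grep -A1 'repoTags:'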

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-456067 ssh pgrep buildkitd: exit status 1 (192.171854ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image build -t localhost/my-image:functional-456067 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-456067 image build -t localhost/my-image:functional-456067 testdata/build --alsologtostderr: (2.037648249s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-456067 image build -t localhost/my-image:functional-456067 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 93cc1d90764
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-456067
--> 99d57ab3ebc
Successfully tagged localhost/my-image:functional-456067
99d57ab3ebc355a70f265be625963bbc8548a91610d0f0204a0153dcd49fee68
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-456067 image build -t localhost/my-image:functional-456067 testdata/build --alsologtostderr:
I0917 00:12:15.681827  154680 out.go:360] Setting OutFile to fd 1 ...
I0917 00:12:15.682105  154680 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:12:15.682115  154680 out.go:374] Setting ErrFile to fd 2...
I0917 00:12:15.682120  154680 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:12:15.682304  154680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
I0917 00:12:15.682883  154680 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:12:15.683537  154680 config.go:182] Loaded profile config "functional-456067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:12:15.683965  154680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 00:12:15.684025  154680 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 00:12:15.697635  154680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41895
I0917 00:12:15.698146  154680 main.go:141] libmachine: () Calling .GetVersion
I0917 00:12:15.698713  154680 main.go:141] libmachine: Using API Version  1
I0917 00:12:15.698766  154680 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 00:12:15.699168  154680 main.go:141] libmachine: () Calling .GetMachineName
I0917 00:12:15.699385  154680 main.go:141] libmachine: (functional-456067) Calling .GetState
I0917 00:12:15.701213  154680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0917 00:12:15.701254  154680 main.go:141] libmachine: Launching plugin server for driver kvm2
I0917 00:12:15.714811  154680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45387
I0917 00:12:15.715288  154680 main.go:141] libmachine: () Calling .GetVersion
I0917 00:12:15.715839  154680 main.go:141] libmachine: Using API Version  1
I0917 00:12:15.715884  154680 main.go:141] libmachine: () Calling .SetConfigRaw
I0917 00:12:15.716302  154680 main.go:141] libmachine: () Calling .GetMachineName
I0917 00:12:15.716617  154680 main.go:141] libmachine: (functional-456067) Calling .DriverName
I0917 00:12:15.716898  154680 ssh_runner.go:195] Run: systemctl --version
I0917 00:12:15.716935  154680 main.go:141] libmachine: (functional-456067) Calling .GetSSHHostname
I0917 00:12:15.720047  154680 main.go:141] libmachine: (functional-456067) DBG | domain functional-456067 has defined MAC address 52:54:00:03:de:c7 in network mk-functional-456067
I0917 00:12:15.720453  154680 main.go:141] libmachine: (functional-456067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:de:c7", ip: ""} in network mk-functional-456067: {Iface:virbr2 ExpiryTime:2025-09-17 01:08:00 +0000 UTC Type:0 Mac:52:54:00:03:de:c7 Iaid: IPaddr:192.168.50.44 Prefix:24 Hostname:functional-456067 Clientid:01:52:54:00:03:de:c7}
I0917 00:12:15.720500  154680 main.go:141] libmachine: (functional-456067) DBG | domain functional-456067 has defined IP address 192.168.50.44 and MAC address 52:54:00:03:de:c7 in network mk-functional-456067
I0917 00:12:15.720637  154680 main.go:141] libmachine: (functional-456067) Calling .GetSSHPort
I0917 00:12:15.720816  154680 main.go:141] libmachine: (functional-456067) Calling .GetSSHKeyPath
I0917 00:12:15.720994  154680 main.go:141] libmachine: (functional-456067) Calling .GetSSHUsername
I0917 00:12:15.721152  154680 sshutil.go:53] new ssh client: &{IP:192.168.50.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/functional-456067/id_rsa Username:docker}
I0917 00:12:15.804228  154680 build_images.go:161] Building image from path: /tmp/build.2345637183.tar
I0917 00:12:15.804294  154680 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0917 00:12:15.817072  154680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2345637183.tar
I0917 00:12:15.822447  154680 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2345637183.tar: stat -c "%s %y" /var/lib/minikube/build/build.2345637183.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2345637183.tar': No such file or directory
I0917 00:12:15.822488  154680 ssh_runner.go:362] scp /tmp/build.2345637183.tar --> /var/lib/minikube/build/build.2345637183.tar (3072 bytes)
I0917 00:12:15.856746  154680 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2345637183
I0917 00:12:15.871036  154680 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2345637183 -xf /var/lib/minikube/build/build.2345637183.tar
I0917 00:12:15.883260  154680 crio.go:315] Building image: /var/lib/minikube/build/build.2345637183
I0917 00:12:15.883355  154680 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-456067 /var/lib/minikube/build/build.2345637183 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0917 00:12:17.639225  154680 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-456067 /var/lib/minikube/build/build.2345637183 --cgroup-manager=cgroupfs: (1.755834384s)
I0917 00:12:17.639319  154680 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2345637183
I0917 00:12:17.654766  154680 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2345637183.tar
I0917 00:12:17.667334  154680 build_images.go:217] Built localhost/my-image:functional-456067 from /tmp/build.2345637183.tar
I0917 00:12:17.667373  154680 build_images.go:133] succeeded building to: functional-456067
I0917 00:12:17.667378  154680 build_images.go:134] failed building to: 
I0917 00:12:17.667403  154680 main.go:141] libmachine: Making call to close driver server
I0917 00:12:17.667413  154680 main.go:141] libmachine: (functional-456067) Calling .Close
I0917 00:12:17.667737  154680 main.go:141] libmachine: Successfully made call to close driver server
I0917 00:12:17.667755  154680 main.go:141] libmachine: Making call to close connection to plugin binary
I0917 00:12:17.667763  154680 main.go:141] libmachine: Making call to close driver server
I0917 00:12:17.667770  154680 main.go:141] libmachine: (functional-456067) Calling .Close
I0917 00:12:17.668062  154680 main.go:141] libmachine: (functional-456067) DBG | Closing plugin on server side
I0917 00:12:17.668086  154680 main.go:141] libmachine: Successfully made call to close driver server
I0917 00:12:17.668109  154680 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.45s)
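Note: per the trace above, "image build" ships the build context into the node as a tarball and runs podman there. A rough sketch of the same flow done by hand; the use of "minikube cp", the /tmp paths in the guest, and the ctx directory name are illustrative (the test copies a generated build.NNNN.tar with its own scp helper into /var/lib/minikube/build):

tar -C testdata/build -cf /tmp/build.tar .
out/minikube-linux-amd64 -p functional-456067 cp /tmp/build.tar /tmp/build.tar
out/minikube-linux-amd64 -p functional-456067 ssh "sudo mkdir -p /tmp/ctx && sudo tar -C /tmp/ctx -xf /tmp/build.tar"
out/minikube-linux-amd64 -p functional-456067 ssh "sudo podman build -t localhost/my-image:functional-456067 /tmp/ctx --cgroup-manager=cgroupfs"
out/minikube-linux-amd64 -p functional-456067 image ls   # the new localhost/my-image tag should appear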

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-456067
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
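Note: all three update-context cases run the same command; it rewrites the kubeconfig entry for the profile so kubectl points at the current API server address. A quick follow-up check, not part of the test, might look like:

out/minikube-linux-amd64 -p functional-456067 update-context --alsologtostderr -v=2
kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-456067")].cluster.server}'
kubectl --context functional-456067 get nodes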

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image load --daemon kicbase/echo-server:functional-456067 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-456067 image load --daemon kicbase/echo-server:functional-456067 --alsologtostderr: (1.643328612s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.96s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "417.698426ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "53.873871ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "401.478728ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.83842ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)
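Note: the ProfileCmd cases differ only in output mode; --light (or -l) skips probing cluster status, which is why the light variants above return in roughly 50-60ms versus ~400ms for the full listing. For reference:

out/minikube-linux-amd64 profile list               # full table, probes each cluster's status
out/minikube-linux-amd64 profile list -l            # light mode: no status probes
out/minikube-linux-amd64 profile list -o json
out/minikube-linux-amd64 profile list -o json --light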

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image load --daemon kicbase/echo-server:functional-456067 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-456067
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image load --daemon kicbase/echo-server:functional-456067 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image save kicbase/echo-server:functional-456067 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-456067 image save kicbase/echo-server:functional-456067 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.471990345s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image rm kicbase/echo-server:functional-456067 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-456067
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 image save --daemon kicbase/echo-server:functional-456067 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-456067
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)
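Note: the last few ImageCommands cases form a save/load round trip. Collected in one place below; the relative tar path is illustrative (the test writes into the Jenkins workspace):

out/minikube-linux-amd64 -p functional-456067 image save kicbase/echo-server:functional-456067 ./echo-server-save.tar
out/minikube-linux-amd64 -p functional-456067 image rm kicbase/echo-server:functional-456067
out/minikube-linux-amd64 -p functional-456067 image load ./echo-server-save.tar
out/minikube-linux-amd64 -p functional-456067 image save --daemon kicbase/echo-server:functional-456067
docker image inspect localhost/kicbase/echo-server:functional-456067   # confirms the image landed back in the local docker daemon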

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (57.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-456067 /tmp/TestFunctionalparallelMountCmdany-port1000287547/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1758067873373867933" to /tmp/TestFunctionalparallelMountCmdany-port1000287547/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1758067873373867933" to /tmp/TestFunctionalparallelMountCmdany-port1000287547/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1758067873373867933" to /tmp/TestFunctionalparallelMountCmdany-port1000287547/001/test-1758067873373867933
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-456067 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (196.175318ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0917 00:11:13.570376  145530 retry.go:31] will retry after 605.873062ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 17 00:11 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 17 00:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 17 00:11 test-1758067873373867933
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh cat /mount-9p/test-1758067873373867933
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-456067 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [6736e2e9-c999-4357-b65c-6e99190f152c] Pending
helpers_test.go:352: "busybox-mount" [6736e2e9-c999-4357-b65c-6e99190f152c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0917 00:11:53.405649  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [6736e2e9-c999-4357-b65c-6e99190f152c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [6736e2e9-c999-4357-b65c-6e99190f152c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 55.003281367s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-456067 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-456067 /tmp/TestFunctionalparallelMountCmdany-port1000287547/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (57.51s)
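Note: the mount test serves a host directory into the guest over 9p and then verifies it over ssh. A minimal sketch of the same flow; the host directory is illustrative (the test uses a per-run temp dir and runs the mount as a daemon):

out/minikube-linux-amd64 mount -p functional-456067 /tmp/demo-dir:/mount-9p &
out/minikube-linux-amd64 -p functional-456067 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-456067 ssh -- ls -la /mount-9p
kill %1   # the test stops its daemonized mount process instead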

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-456067 /tmp/TestFunctionalparallelMountCmdspecific-port3572376631/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-456067 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (187.868993ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0917 00:12:11.068380  145530 retry.go:31] will retry after 643.834625ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-456067 /tmp/TestFunctionalparallelMountCmdspecific-port3572376631/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-456067 ssh "sudo umount -f /mount-9p": exit status 1 (189.351197ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-456067 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-456067 /tmp/TestFunctionalparallelMountCmdspecific-port3572376631/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.81s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-456067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3539378027/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-456067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3539378027/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-456067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3539378027/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-456067 ssh "findmnt -T" /mount1: exit status 1 (212.988074ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0917 00:12:12.904987  145530 retry.go:31] will retry after 418.334316ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-456067 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-456067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3539378027/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-456067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3539378027/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-456067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3539378027/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.24s)
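Note: VerifyCleanup relies on the --kill flag to tear down all mount daemons for a profile in one shot, as seen above. For reference:

out/minikube-linux-amd64 mount -p functional-456067 --kill=true
# afterwards none of /mount1, /mount2, /mount3 should show up as 9p mounts in the guest
out/minikube-linux-amd64 -p functional-456067 ssh "findmnt -T /mount1" || true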

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-456067 service list: (1.249828352s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.25s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-456067 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-456067 service list -o json: (1.237428203s)
functional_test.go:1504: Took "1.237549204s" to run "out/minikube-linux-amd64 -p functional-456067 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)
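Note: the two ServiceCmd cases are the plain and JSON forms of the same listing. A sketch; the jq step is illustrative only and assumes the JSON output is an array of objects with a Name field, which the test does not assert:

out/minikube-linux-amd64 -p functional-456067 service list
out/minikube-linux-amd64 -p functional-456067 service list -o json
out/minikube-linux-amd64 -p functional-456067 service list -o json | jq -r '.[].Name'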

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-456067
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-456067
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-456067
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (243.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0917 00:21:53.407040  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:23:16.476165  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-538805 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (4m3.025322815s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (243.78s)
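Note: --ha brings up a multi-control-plane cluster (three control-plane nodes by default), which is why StartCluster takes around four minutes here. The equivalent invocation, taken from the run above:

out/minikube-linux-amd64 -p ha-538805 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio --auto-update-drivers=false
out/minikube-linux-amd64 -p ha-538805 status --alsologtostderr -v 5   # every node should report Running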

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-538805 kubectl -- rollout status deployment/busybox: (3.080961202s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- exec busybox-7b57f96db7-5thcv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- exec busybox-7b57f96db7-bc5n2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- exec busybox-7b57f96db7-dhbs9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- exec busybox-7b57f96db7-5thcv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- exec busybox-7b57f96db7-bc5n2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- exec busybox-7b57f96db7-dhbs9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- exec busybox-7b57f96db7-5thcv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- exec busybox-7b57f96db7-bc5n2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- exec busybox-7b57f96db7-dhbs9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.44s)
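Note: DeployApp rolls out the busybox test deployment and checks in-cluster DNS from each replica. The same checks by hand; pod names vary per run, so substitute one from the get pods output:

out/minikube-linux-amd64 -p ha-538805 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-linux-amd64 -p ha-538805 kubectl -- rollout status deployment/busybox
out/minikube-linux-amd64 -p ha-538805 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
out/minikube-linux-amd64 -p ha-538805 kubectl -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local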

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- exec busybox-7b57f96db7-5thcv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- exec busybox-7b57f96db7-5thcv -- sh -c "ping -c 1 192.168.50.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- exec busybox-7b57f96db7-bc5n2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- exec busybox-7b57f96db7-bc5n2 -- sh -c "ping -c 1 192.168.50.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- exec busybox-7b57f96db7-dhbs9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 kubectl -- exec busybox-7b57f96db7-dhbs9 -- sh -c "ping -c 1 192.168.50.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)
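Note: PingHostFromPods checks pod-to-host connectivity by resolving host.minikube.internal inside each pod and pinging the host's address on the libvirt network (192.168.50.1 in this run). The two commands, as run by the test against one pod:

out/minikube-linux-amd64 -p ha-538805 kubectl -- exec <pod-name> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-amd64 -p ha-538805 kubectl -- exec <pod-name> -- sh -c "ping -c 1 192.168.50.1"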

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (47.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 node add --alsologtostderr -v 5
E0917 00:25:50.331362  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:25:50.337977  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:25:50.349550  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:25:50.371105  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:25:50.412668  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:25:50.494325  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:25:50.655998  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:25:50.977789  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:25:51.619920  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:25:52.902213  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:25:55.464174  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:26:00.585693  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-538805 node add --alsologtostderr -v 5: (46.913965845s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 status --alsologtostderr -v 5
E0917 00:26:10.827022  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-538805 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp testdata/cp-test.txt ha-538805:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp ha-538805:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2191738518/001/cp-test_ha-538805.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp ha-538805:/home/docker/cp-test.txt ha-538805-m02:/home/docker/cp-test_ha-538805_ha-538805-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m02 "sudo cat /home/docker/cp-test_ha-538805_ha-538805-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp ha-538805:/home/docker/cp-test.txt ha-538805-m03:/home/docker/cp-test_ha-538805_ha-538805-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m03 "sudo cat /home/docker/cp-test_ha-538805_ha-538805-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp ha-538805:/home/docker/cp-test.txt ha-538805-m04:/home/docker/cp-test_ha-538805_ha-538805-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m04 "sudo cat /home/docker/cp-test_ha-538805_ha-538805-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp testdata/cp-test.txt ha-538805-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp ha-538805-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2191738518/001/cp-test_ha-538805-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp ha-538805-m02:/home/docker/cp-test.txt ha-538805:/home/docker/cp-test_ha-538805-m02_ha-538805.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805 "sudo cat /home/docker/cp-test_ha-538805-m02_ha-538805.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp ha-538805-m02:/home/docker/cp-test.txt ha-538805-m03:/home/docker/cp-test_ha-538805-m02_ha-538805-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m03 "sudo cat /home/docker/cp-test_ha-538805-m02_ha-538805-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp ha-538805-m02:/home/docker/cp-test.txt ha-538805-m04:/home/docker/cp-test_ha-538805-m02_ha-538805-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m04 "sudo cat /home/docker/cp-test_ha-538805-m02_ha-538805-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp testdata/cp-test.txt ha-538805-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp ha-538805-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2191738518/001/cp-test_ha-538805-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp ha-538805-m03:/home/docker/cp-test.txt ha-538805:/home/docker/cp-test_ha-538805-m03_ha-538805.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805 "sudo cat /home/docker/cp-test_ha-538805-m03_ha-538805.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp ha-538805-m03:/home/docker/cp-test.txt ha-538805-m02:/home/docker/cp-test_ha-538805-m03_ha-538805-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m02 "sudo cat /home/docker/cp-test_ha-538805-m03_ha-538805-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp ha-538805-m03:/home/docker/cp-test.txt ha-538805-m04:/home/docker/cp-test_ha-538805-m03_ha-538805-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m04 "sudo cat /home/docker/cp-test_ha-538805-m03_ha-538805-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp testdata/cp-test.txt ha-538805-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp ha-538805-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2191738518/001/cp-test_ha-538805-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp ha-538805-m04:/home/docker/cp-test.txt ha-538805:/home/docker/cp-test_ha-538805-m04_ha-538805.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805 "sudo cat /home/docker/cp-test_ha-538805-m04_ha-538805.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp ha-538805-m04:/home/docker/cp-test.txt ha-538805-m02:/home/docker/cp-test_ha-538805-m04_ha-538805-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m02 "sudo cat /home/docker/cp-test_ha-538805-m04_ha-538805-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 cp ha-538805-m04:/home/docker/cp-test.txt ha-538805-m03:/home/docker/cp-test_ha-538805-m04_ha-538805-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 ssh -n ha-538805-m03 "sudo cat /home/docker/cp-test_ha-538805-m04_ha-538805-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.55s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (87.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 node stop m02 --alsologtostderr -v 5
E0917 00:26:31.308388  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:26:53.407057  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:27:12.271027  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-538805 node stop m02 --alsologtostderr -v 5: (1m26.353863072s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-538805 status --alsologtostderr -v 5: exit status 7 (686.623335ms)

                                                
                                                
-- stdout --
	ha-538805
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-538805-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-538805-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-538805-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:27:52.676284  162221 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:27:52.676561  162221 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:27:52.676571  162221 out.go:374] Setting ErrFile to fd 2...
	I0917 00:27:52.676576  162221 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:27:52.676759  162221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
	I0917 00:27:52.676945  162221 out.go:368] Setting JSON to false
	I0917 00:27:52.676968  162221 mustload.go:65] Loading cluster: ha-538805
	I0917 00:27:52.677048  162221 notify.go:220] Checking for updates...
	I0917 00:27:52.677515  162221 config.go:182] Loaded profile config "ha-538805": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:27:52.677543  162221 status.go:174] checking status of ha-538805 ...
	I0917 00:27:52.678117  162221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:27:52.678159  162221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:27:52.699031  162221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33235
	I0917 00:27:52.699656  162221 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:27:52.700326  162221 main.go:141] libmachine: Using API Version  1
	I0917 00:27:52.700358  162221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:27:52.700774  162221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:27:52.701030  162221 main.go:141] libmachine: (ha-538805) Calling .GetState
	I0917 00:27:52.703184  162221 status.go:371] ha-538805 host status = "Running" (err=<nil>)
	I0917 00:27:52.703214  162221 host.go:66] Checking if "ha-538805" exists ...
	I0917 00:27:52.703567  162221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:27:52.703616  162221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:27:52.720759  162221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43973
	I0917 00:27:52.721253  162221 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:27:52.721772  162221 main.go:141] libmachine: Using API Version  1
	I0917 00:27:52.721795  162221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:27:52.722155  162221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:27:52.722399  162221 main.go:141] libmachine: (ha-538805) Calling .GetIP
	I0917 00:27:52.726005  162221 main.go:141] libmachine: (ha-538805) DBG | domain ha-538805 has defined MAC address 52:54:00:75:0a:29 in network mk-ha-538805
	I0917 00:27:52.726713  162221 main.go:141] libmachine: (ha-538805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:0a:29", ip: ""} in network mk-ha-538805: {Iface:virbr2 ExpiryTime:2025-09-17 01:21:29 +0000 UTC Type:0 Mac:52:54:00:75:0a:29 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:ha-538805 Clientid:01:52:54:00:75:0a:29}
	I0917 00:27:52.726740  162221 main.go:141] libmachine: (ha-538805) DBG | domain ha-538805 has defined IP address 192.168.50.148 and MAC address 52:54:00:75:0a:29 in network mk-ha-538805
	I0917 00:27:52.726988  162221 host.go:66] Checking if "ha-538805" exists ...
	I0917 00:27:52.727290  162221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:27:52.727336  162221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:27:52.741342  162221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38473
	I0917 00:27:52.741899  162221 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:27:52.742465  162221 main.go:141] libmachine: Using API Version  1
	I0917 00:27:52.742489  162221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:27:52.742830  162221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:27:52.743034  162221 main.go:141] libmachine: (ha-538805) Calling .DriverName
	I0917 00:27:52.743328  162221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:27:52.743360  162221 main.go:141] libmachine: (ha-538805) Calling .GetSSHHostname
	I0917 00:27:52.747132  162221 main.go:141] libmachine: (ha-538805) DBG | domain ha-538805 has defined MAC address 52:54:00:75:0a:29 in network mk-ha-538805
	I0917 00:27:52.747671  162221 main.go:141] libmachine: (ha-538805) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:0a:29", ip: ""} in network mk-ha-538805: {Iface:virbr2 ExpiryTime:2025-09-17 01:21:29 +0000 UTC Type:0 Mac:52:54:00:75:0a:29 Iaid: IPaddr:192.168.50.148 Prefix:24 Hostname:ha-538805 Clientid:01:52:54:00:75:0a:29}
	I0917 00:27:52.747696  162221 main.go:141] libmachine: (ha-538805) DBG | domain ha-538805 has defined IP address 192.168.50.148 and MAC address 52:54:00:75:0a:29 in network mk-ha-538805
	I0917 00:27:52.747908  162221 main.go:141] libmachine: (ha-538805) Calling .GetSSHPort
	I0917 00:27:52.748118  162221 main.go:141] libmachine: (ha-538805) Calling .GetSSHKeyPath
	I0917 00:27:52.748309  162221 main.go:141] libmachine: (ha-538805) Calling .GetSSHUsername
	I0917 00:27:52.748482  162221 sshutil.go:53] new ssh client: &{IP:192.168.50.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/ha-538805/id_rsa Username:docker}
	I0917 00:27:52.843252  162221 ssh_runner.go:195] Run: systemctl --version
	I0917 00:27:52.850675  162221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:27:52.869819  162221 kubeconfig.go:125] found "ha-538805" server: "https://192.168.50.254:8443"
	I0917 00:27:52.869886  162221 api_server.go:166] Checking apiserver status ...
	I0917 00:27:52.869943  162221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:27:52.892260  162221 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	W0917 00:27:52.904485  162221 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:27:52.904604  162221 ssh_runner.go:195] Run: ls
	I0917 00:27:52.911314  162221 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8443/healthz ...
	I0917 00:27:52.916559  162221 api_server.go:279] https://192.168.50.254:8443/healthz returned 200:
	ok
	I0917 00:27:52.916608  162221 status.go:463] ha-538805 apiserver status = Running (err=<nil>)
	I0917 00:27:52.916642  162221 status.go:176] ha-538805 status: &{Name:ha-538805 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:27:52.916679  162221 status.go:174] checking status of ha-538805-m02 ...
	I0917 00:27:52.917191  162221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:27:52.917242  162221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:27:52.931570  162221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44059
	I0917 00:27:52.932163  162221 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:27:52.932704  162221 main.go:141] libmachine: Using API Version  1
	I0917 00:27:52.932725  162221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:27:52.933274  162221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:27:52.933512  162221 main.go:141] libmachine: (ha-538805-m02) Calling .GetState
	I0917 00:27:52.935145  162221 status.go:371] ha-538805-m02 host status = "Stopped" (err=<nil>)
	I0917 00:27:52.935162  162221 status.go:384] host is not running, skipping remaining checks
	I0917 00:27:52.935170  162221 status.go:176] ha-538805-m02 status: &{Name:ha-538805-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:27:52.935193  162221 status.go:174] checking status of ha-538805-m03 ...
	I0917 00:27:52.935485  162221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:27:52.935543  162221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:27:52.949233  162221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I0917 00:27:52.949776  162221 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:27:52.950314  162221 main.go:141] libmachine: Using API Version  1
	I0917 00:27:52.950336  162221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:27:52.950690  162221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:27:52.951015  162221 main.go:141] libmachine: (ha-538805-m03) Calling .GetState
	I0917 00:27:52.952949  162221 status.go:371] ha-538805-m03 host status = "Running" (err=<nil>)
	I0917 00:27:52.952970  162221 host.go:66] Checking if "ha-538805-m03" exists ...
	I0917 00:27:52.953276  162221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:27:52.953319  162221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:27:52.967500  162221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37723
	I0917 00:27:52.968065  162221 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:27:52.968552  162221 main.go:141] libmachine: Using API Version  1
	I0917 00:27:52.968579  162221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:27:52.969010  162221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:27:52.969298  162221 main.go:141] libmachine: (ha-538805-m03) Calling .GetIP
	I0917 00:27:52.972947  162221 main.go:141] libmachine: (ha-538805-m03) DBG | domain ha-538805-m03 has defined MAC address 52:54:00:4b:7b:2b in network mk-ha-538805
	I0917 00:27:52.973418  162221 main.go:141] libmachine: (ha-538805-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:7b:2b", ip: ""} in network mk-ha-538805: {Iface:virbr2 ExpiryTime:2025-09-17 01:23:38 +0000 UTC Type:0 Mac:52:54:00:4b:7b:2b Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:ha-538805-m03 Clientid:01:52:54:00:4b:7b:2b}
	I0917 00:27:52.973464  162221 main.go:141] libmachine: (ha-538805-m03) DBG | domain ha-538805-m03 has defined IP address 192.168.50.131 and MAC address 52:54:00:4b:7b:2b in network mk-ha-538805
	I0917 00:27:52.973642  162221 host.go:66] Checking if "ha-538805-m03" exists ...
	I0917 00:27:52.973966  162221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:27:52.974005  162221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:27:52.988237  162221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33255
	I0917 00:27:52.988745  162221 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:27:52.989278  162221 main.go:141] libmachine: Using API Version  1
	I0917 00:27:52.989311  162221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:27:52.989726  162221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:27:52.989952  162221 main.go:141] libmachine: (ha-538805-m03) Calling .DriverName
	I0917 00:27:52.990143  162221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:27:52.990163  162221 main.go:141] libmachine: (ha-538805-m03) Calling .GetSSHHostname
	I0917 00:27:52.993454  162221 main.go:141] libmachine: (ha-538805-m03) DBG | domain ha-538805-m03 has defined MAC address 52:54:00:4b:7b:2b in network mk-ha-538805
	I0917 00:27:52.993974  162221 main.go:141] libmachine: (ha-538805-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:7b:2b", ip: ""} in network mk-ha-538805: {Iface:virbr2 ExpiryTime:2025-09-17 01:23:38 +0000 UTC Type:0 Mac:52:54:00:4b:7b:2b Iaid: IPaddr:192.168.50.131 Prefix:24 Hostname:ha-538805-m03 Clientid:01:52:54:00:4b:7b:2b}
	I0917 00:27:52.994010  162221 main.go:141] libmachine: (ha-538805-m03) DBG | domain ha-538805-m03 has defined IP address 192.168.50.131 and MAC address 52:54:00:4b:7b:2b in network mk-ha-538805
	I0917 00:27:52.994189  162221 main.go:141] libmachine: (ha-538805-m03) Calling .GetSSHPort
	I0917 00:27:52.994369  162221 main.go:141] libmachine: (ha-538805-m03) Calling .GetSSHKeyPath
	I0917 00:27:52.994565  162221 main.go:141] libmachine: (ha-538805-m03) Calling .GetSSHUsername
	I0917 00:27:52.994717  162221 sshutil.go:53] new ssh client: &{IP:192.168.50.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/ha-538805-m03/id_rsa Username:docker}
	I0917 00:27:53.079693  162221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:27:53.099962  162221 kubeconfig.go:125] found "ha-538805" server: "https://192.168.50.254:8443"
	I0917 00:27:53.099994  162221 api_server.go:166] Checking apiserver status ...
	I0917 00:27:53.100036  162221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:27:53.121570  162221 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1816/cgroup
	W0917 00:27:53.134044  162221 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1816/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:27:53.134108  162221 ssh_runner.go:195] Run: ls
	I0917 00:27:53.139443  162221 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8443/healthz ...
	I0917 00:27:53.144756  162221 api_server.go:279] https://192.168.50.254:8443/healthz returned 200:
	ok
	I0917 00:27:53.144784  162221 status.go:463] ha-538805-m03 apiserver status = Running (err=<nil>)
	I0917 00:27:53.144796  162221 status.go:176] ha-538805-m03 status: &{Name:ha-538805-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:27:53.144815  162221 status.go:174] checking status of ha-538805-m04 ...
	I0917 00:27:53.145148  162221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:27:53.145188  162221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:27:53.159105  162221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39029
	I0917 00:27:53.159702  162221 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:27:53.160287  162221 main.go:141] libmachine: Using API Version  1
	I0917 00:27:53.160311  162221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:27:53.160697  162221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:27:53.160950  162221 main.go:141] libmachine: (ha-538805-m04) Calling .GetState
	I0917 00:27:53.162560  162221 status.go:371] ha-538805-m04 host status = "Running" (err=<nil>)
	I0917 00:27:53.162577  162221 host.go:66] Checking if "ha-538805-m04" exists ...
	I0917 00:27:53.162881  162221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:27:53.162943  162221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:27:53.176647  162221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34339
	I0917 00:27:53.177140  162221 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:27:53.177635  162221 main.go:141] libmachine: Using API Version  1
	I0917 00:27:53.177659  162221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:27:53.178117  162221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:27:53.178388  162221 main.go:141] libmachine: (ha-538805-m04) Calling .GetIP
	I0917 00:27:53.182251  162221 main.go:141] libmachine: (ha-538805-m04) DBG | domain ha-538805-m04 has defined MAC address 52:54:00:3f:a2:62 in network mk-ha-538805
	I0917 00:27:53.182750  162221 main.go:141] libmachine: (ha-538805-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a2:62", ip: ""} in network mk-ha-538805: {Iface:virbr2 ExpiryTime:2025-09-17 01:25:40 +0000 UTC Type:0 Mac:52:54:00:3f:a2:62 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:ha-538805-m04 Clientid:01:52:54:00:3f:a2:62}
	I0917 00:27:53.182804  162221 main.go:141] libmachine: (ha-538805-m04) DBG | domain ha-538805-m04 has defined IP address 192.168.50.243 and MAC address 52:54:00:3f:a2:62 in network mk-ha-538805
	I0917 00:27:53.182995  162221 host.go:66] Checking if "ha-538805-m04" exists ...
	I0917 00:27:53.183374  162221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:27:53.183431  162221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:27:53.198411  162221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37959
	I0917 00:27:53.198994  162221 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:27:53.199510  162221 main.go:141] libmachine: Using API Version  1
	I0917 00:27:53.199537  162221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:27:53.199902  162221 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:27:53.200098  162221 main.go:141] libmachine: (ha-538805-m04) Calling .DriverName
	I0917 00:27:53.200359  162221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:27:53.200386  162221 main.go:141] libmachine: (ha-538805-m04) Calling .GetSSHHostname
	I0917 00:27:53.203378  162221 main.go:141] libmachine: (ha-538805-m04) DBG | domain ha-538805-m04 has defined MAC address 52:54:00:3f:a2:62 in network mk-ha-538805
	I0917 00:27:53.203952  162221 main.go:141] libmachine: (ha-538805-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a2:62", ip: ""} in network mk-ha-538805: {Iface:virbr2 ExpiryTime:2025-09-17 01:25:40 +0000 UTC Type:0 Mac:52:54:00:3f:a2:62 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:ha-538805-m04 Clientid:01:52:54:00:3f:a2:62}
	I0917 00:27:53.204000  162221 main.go:141] libmachine: (ha-538805-m04) DBG | domain ha-538805-m04 has defined IP address 192.168.50.243 and MAC address 52:54:00:3f:a2:62 in network mk-ha-538805
	I0917 00:27:53.204122  162221 main.go:141] libmachine: (ha-538805-m04) Calling .GetSSHPort
	I0917 00:27:53.204346  162221 main.go:141] libmachine: (ha-538805-m04) Calling .GetSSHKeyPath
	I0917 00:27:53.204509  162221 main.go:141] libmachine: (ha-538805-m04) Calling .GetSSHUsername
	I0917 00:27:53.204632  162221 sshutil.go:53] new ssh client: &{IP:192.168.50.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/ha-538805-m04/id_rsa Username:docker}
	I0917 00:27:53.289626  162221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:27:53.310575  162221 status.go:176] ha-538805-m04 status: &{Name:ha-538805-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (87.04s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (36.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-538805 node start m02 --alsologtostderr -v 5: (35.838371195s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-538805 status --alsologtostderr -v 5: (1.011488139s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (36.92s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.95s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (377.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 stop --alsologtostderr -v 5
E0917 00:28:34.193154  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:30:50.330683  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:31:18.035581  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:31:53.408024  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-538805 stop --alsologtostderr -v 5: (4m4.204215945s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-538805 start --wait true --alsologtostderr -v 5: (2m13.152401636s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (377.48s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-538805 node delete m03 --alsologtostderr -v 5: (17.834464816s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.66s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (260.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 stop --alsologtostderr -v 5
E0917 00:35:50.336182  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:36:53.407666  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-538805 stop --alsologtostderr -v 5: (4m20.580545144s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-538805 status --alsologtostderr -v 5: exit status 7 (113.654563ms)

                                                
                                                
-- stdout --
	ha-538805
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-538805-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-538805-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:39:29.316251  166198 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:39:29.316606  166198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:39:29.316618  166198 out.go:374] Setting ErrFile to fd 2...
	I0917 00:39:29.316624  166198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:39:29.316819  166198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
	I0917 00:39:29.317074  166198 out.go:368] Setting JSON to false
	I0917 00:39:29.317101  166198 mustload.go:65] Loading cluster: ha-538805
	I0917 00:39:29.317359  166198 notify.go:220] Checking for updates...
	I0917 00:39:29.317536  166198 config.go:182] Loaded profile config "ha-538805": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:39:29.317574  166198 status.go:174] checking status of ha-538805 ...
	I0917 00:39:29.318100  166198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:39:29.318148  166198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:39:29.340635  166198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44115
	I0917 00:39:29.341126  166198 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:39:29.341671  166198 main.go:141] libmachine: Using API Version  1
	I0917 00:39:29.341695  166198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:39:29.342201  166198 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:39:29.342446  166198 main.go:141] libmachine: (ha-538805) Calling .GetState
	I0917 00:39:29.344566  166198 status.go:371] ha-538805 host status = "Stopped" (err=<nil>)
	I0917 00:39:29.344582  166198 status.go:384] host is not running, skipping remaining checks
	I0917 00:39:29.344587  166198 status.go:176] ha-538805 status: &{Name:ha-538805 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:39:29.344607  166198 status.go:174] checking status of ha-538805-m02 ...
	I0917 00:39:29.344946  166198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:39:29.345008  166198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:39:29.358745  166198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46211
	I0917 00:39:29.359165  166198 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:39:29.359595  166198 main.go:141] libmachine: Using API Version  1
	I0917 00:39:29.359619  166198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:39:29.360021  166198 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:39:29.360220  166198 main.go:141] libmachine: (ha-538805-m02) Calling .GetState
	I0917 00:39:29.362130  166198 status.go:371] ha-538805-m02 host status = "Stopped" (err=<nil>)
	I0917 00:39:29.362148  166198 status.go:384] host is not running, skipping remaining checks
	I0917 00:39:29.362156  166198 status.go:176] ha-538805-m02 status: &{Name:ha-538805-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:39:29.362191  166198 status.go:174] checking status of ha-538805-m04 ...
	I0917 00:39:29.362464  166198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:39:29.362509  166198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:39:29.375460  166198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39695
	I0917 00:39:29.376117  166198 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:39:29.376681  166198 main.go:141] libmachine: Using API Version  1
	I0917 00:39:29.376704  166198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:39:29.377097  166198 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:39:29.377296  166198 main.go:141] libmachine: (ha-538805-m04) Calling .GetState
	I0917 00:39:29.379182  166198 status.go:371] ha-538805-m04 host status = "Stopped" (err=<nil>)
	I0917 00:39:29.379202  166198 status.go:384] host is not running, skipping remaining checks
	I0917 00:39:29.379210  166198 status.go:176] ha-538805-m04 status: &{Name:ha-538805-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (260.69s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (98.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0917 00:39:56.477706  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:40:50.331012  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-538805 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m37.914850488s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (98.72s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (81.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 node add --control-plane --alsologtostderr -v 5
E0917 00:41:53.408257  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:42:13.397831  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-538805 node add --control-plane --alsologtostderr -v 5: (1m20.609016884s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-538805 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.51s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.97s)

                                                
                                    
TestJSONOutput/start/Command (77.89s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-818273 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-818273 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m17.88561614s)
--- PASS: TestJSONOutput/start/Command (77.89s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.8s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-818273 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.80s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-818273 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-818273 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-818273 --output=json --user=testUser: (7.999162516s)
--- PASS: TestJSONOutput/stop/Command (8.00s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-128468 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-128468 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (69.5522ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f62e82c8-8e42-4fe6-aacf-959401314cdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-128468] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"62b6f13f-4720-4b4b-95c0-4753d17aa3f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21550"}}
	{"specversion":"1.0","id":"7d5ee25d-4e5c-4370-b576-8e56964a527f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"11644f0a-8cf3-4653-8e48-e27765fc2fc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig"}}
	{"specversion":"1.0","id":"0f23bebd-8e20-4fca-8dbc-1de115f8ca3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube"}}
	{"specversion":"1.0","id":"7b6ca6d0-3741-4210-942c-1f7668c9f9e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0f05acb9-8b8d-4cec-a6f6-81502aa7db2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"67522101-af99-4058-a010-d4cc193d8432","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-128468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-128468
--- PASS: TestErrorJSONOutput (0.22s)
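Note on the --output=json runs above (TestJSONOutput and TestErrorJSONOutput): minikube emits one CloudEvents-style JSON object per line, as shown in the captured stdout. As an illustrative sketch only, not part of the test suite, such a stream could be decoded in Go roughly as below; the struct fields mirror the captured events, while the package layout and reading from stdin are assumptions.

	// sketch: decode line-delimited CloudEvents as emitted by `minikube ... --output=json`
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the fields visible in the captured output
	// (specversion, id, source, type, data).
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines mixed into the stream
			}
			// e.g. io.k8s.sigs.minikube.error events carry an exit code and message
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}

Piping the stdout block shown above through this sketch would print the step, info, and error messages one per line, which is essentially what the JSON-output tests assert on.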

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (86.79s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-535881 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-535881 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.010384353s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-547962 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-547962 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.886852476s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-535881
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-547962
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-547962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-547962
helpers_test.go:175: Cleaning up "first-535881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-535881
--- PASS: TestMinikubeProfile (86.79s)
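
The profile checks above shell out to `out/minikube-linux-amd64 profile list -ojson` and inspect the JSON. A minimal sketch of the same call follows, decoding into a generic map because the exact schema is not shown in this log; the binary path is the one used throughout this run.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as minikube_profile_test.go:55 above.
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
		if err != nil {
			panic(err)
		}
		var profiles map[string]interface{}
		if err := json.Unmarshal(out, &profiles); err != nil {
			panic(err)
		}
		for key := range profiles {
			fmt.Println("top-level key:", key)
		}
	}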

                                                
                                    
TestMountStart/serial/StartWithMountFirst (21.03s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-089243 --memory=3072 --mount-string /tmp/TestMountStartserial611004409/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-089243 --memory=3072 --mount-string /tmp/TestMountStartserial611004409/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.029726264s)
E0917 00:45:50.331252  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMountStart/serial/StartWithMountFirst (21.03s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-089243 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-089243 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)
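
The verification step above asks the guest for `findmnt --json /minikube-host` over SSH. A sketch of decoding that JSON follows; the field names assume util-linux's usual layout (a "filesystems" array whose entries carry target, source, fstype and options), since the actual findmnt output is not captured in this log.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// findmntOutput assumes the usual util-linux --json layout; adjust if the
	// guest's findmnt differs.
	type findmntOutput struct {
		Filesystems []struct {
			Target  string `json:"target"`
			Source  string `json:"source"`
			Fstype  string `json:"fstype"`
			Options string `json:"options"`
		} `json:"filesystems"`
	}

	func main() {
		// Same SSH invocation as mount_start_test.go:147 above.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-089243",
			"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
		if err != nil {
			panic(err)
		}
		var fm findmntOutput
		if err := json.Unmarshal(out, &fm); err != nil {
			panic(err)
		}
		for _, fs := range fm.Filesystems {
			fmt.Println(fs.Target, fs.Source, fs.Fstype)
		}
	}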

                                                
                                    
TestMountStart/serial/StartWithMountSecond (23.23s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-118723 --memory=3072 --mount-string /tmp/TestMountStartserial611004409/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-118723 --memory=3072 --mount-string /tmp/TestMountStartserial611004409/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (22.224829276s)
--- PASS: TestMountStart/serial/StartWithMountSecond (23.23s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-118723 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-118723 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-089243 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-118723 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-118723 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-118723
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-118723: (1.316568739s)
--- PASS: TestMountStart/serial/Stop (1.32s)

                                                
                                    
TestMountStart/serial/RestartStopped (20.08s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-118723
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-118723: (19.079458584s)
--- PASS: TestMountStart/serial/RestartStopped (20.08s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-118723 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-118723 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (130.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-989933 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0917 00:46:53.407397  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-989933 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m10.490480682s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (130.94s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989933 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989933 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-989933 -- rollout status deployment/busybox: (2.33877533s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989933 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989933 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989933 -- exec busybox-7b57f96db7-5gpfz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989933 -- exec busybox-7b57f96db7-ntj7r -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989933 -- exec busybox-7b57f96db7-5gpfz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989933 -- exec busybox-7b57f96db7-ntj7r -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989933 -- exec busybox-7b57f96db7-5gpfz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989933 -- exec busybox-7b57f96db7-ntj7r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.96s)
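
For reference, the test above drives everything through the bundled kubectl: it lists the busybox pod names with a jsonpath query (space-separated output) and then runs nslookup inside each pod. A small sketch of that loop, using the same invocations as the log; error handling is deliberately minimal.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same jsonpath query as multinode_test.go:528 above; it prints the pod
		// names separated by spaces.
		out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "multinode-989933", "--",
			"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
		if err != nil {
			panic(err)
		}
		for _, pod := range strings.Fields(string(out)) {
			// Per-pod DNS check, as in multinode_test.go:536 above.
			res, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "multinode-989933", "--",
				"exec", pod, "--", "nslookup", "kubernetes.io").CombinedOutput()
			fmt.Printf("%s:\n%s(err=%v)\n", pod, res, err)
		}
	}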

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989933 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989933 -- exec busybox-7b57f96db7-5gpfz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989933 -- exec busybox-7b57f96db7-5gpfz -- sh -c "ping -c 1 192.168.50.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989933 -- exec busybox-7b57f96db7-ntj7r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-989933 -- exec busybox-7b57f96db7-ntj7r -- sh -c "ping -c 1 192.168.50.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)
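
The host-IP extraction above leans on the shell pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, that is, the fifth line of the nslookup output and its third space-separated field. A rough Go equivalent is sketched below purely to make the pipeline explicit; the sample output layout is illustrative (busybox-style nslookup) and not captured from this run.

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: fifth line, third field.
	func hostIP(nslookupOutput string) (string, bool) {
		lines := strings.Split(nslookupOutput, "\n")
		if len(lines) < 5 {
			return "", false
		}
		fields := strings.Split(lines[4], " ")
		if len(fields) < 3 {
			return "", false
		}
		return fields[2], true
	}

	func main() {
		// Illustrative busybox-style output; the real pods' output is not in this log.
		sample := "Server:\t\t10.96.0.10\nAddress 1:\t10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:\thost.minikube.internal\nAddress 1: 192.168.50.1 host.minikube.internal\n"
		if ip, ok := hostIP(sample); ok {
			fmt.Println(ip) // 192.168.50.1
		}
	}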

                                                
                                    
TestMultiNode/serial/AddNode (45.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-989933 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-989933 -v=5 --alsologtostderr: (45.028788498s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.64s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-989933 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 cp testdata/cp-test.txt multinode-989933:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 cp multinode-989933:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2278569323/001/cp-test_multinode-989933.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 cp multinode-989933:/home/docker/cp-test.txt multinode-989933-m02:/home/docker/cp-test_multinode-989933_multinode-989933-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933-m02 "sudo cat /home/docker/cp-test_multinode-989933_multinode-989933-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 cp multinode-989933:/home/docker/cp-test.txt multinode-989933-m03:/home/docker/cp-test_multinode-989933_multinode-989933-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933-m03 "sudo cat /home/docker/cp-test_multinode-989933_multinode-989933-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 cp testdata/cp-test.txt multinode-989933-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 cp multinode-989933-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2278569323/001/cp-test_multinode-989933-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 cp multinode-989933-m02:/home/docker/cp-test.txt multinode-989933:/home/docker/cp-test_multinode-989933-m02_multinode-989933.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933 "sudo cat /home/docker/cp-test_multinode-989933-m02_multinode-989933.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 cp multinode-989933-m02:/home/docker/cp-test.txt multinode-989933-m03:/home/docker/cp-test_multinode-989933-m02_multinode-989933-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933-m03 "sudo cat /home/docker/cp-test_multinode-989933-m02_multinode-989933-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 cp testdata/cp-test.txt multinode-989933-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 cp multinode-989933-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2278569323/001/cp-test_multinode-989933-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 cp multinode-989933-m03:/home/docker/cp-test.txt multinode-989933:/home/docker/cp-test_multinode-989933-m03_multinode-989933.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933 "sudo cat /home/docker/cp-test_multinode-989933-m03_multinode-989933.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 cp multinode-989933-m03:/home/docker/cp-test.txt multinode-989933-m02:/home/docker/cp-test_multinode-989933-m03_multinode-989933-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 ssh -n multinode-989933-m02 "sudo cat /home/docker/cp-test_multinode-989933-m03_multinode-989933-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.58s)

                                                
                                    
TestMultiNode/serial/StopNode (2.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-989933 node stop m03: (1.610479551s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-989933 status: exit status 7 (446.534281ms)

                                                
                                                
-- stdout --
	multinode-989933
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-989933-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-989933-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-989933 status --alsologtostderr: exit status 7 (457.671517ms)

                                                
                                                
-- stdout --
	multinode-989933
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-989933-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-989933-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:49:50.153749  173942 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:49:50.153898  173942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:49:50.153909  173942 out.go:374] Setting ErrFile to fd 2...
	I0917 00:49:50.153916  173942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:49:50.154136  173942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
	I0917 00:49:50.154328  173942 out.go:368] Setting JSON to false
	I0917 00:49:50.154355  173942 mustload.go:65] Loading cluster: multinode-989933
	I0917 00:49:50.154441  173942 notify.go:220] Checking for updates...
	I0917 00:49:50.154741  173942 config.go:182] Loaded profile config "multinode-989933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:49:50.154773  173942 status.go:174] checking status of multinode-989933 ...
	I0917 00:49:50.155288  173942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:49:50.155336  173942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:49:50.169828  173942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32931
	I0917 00:49:50.170453  173942 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:49:50.171222  173942 main.go:141] libmachine: Using API Version  1
	I0917 00:49:50.171250  173942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:49:50.171642  173942 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:49:50.171879  173942 main.go:141] libmachine: (multinode-989933) Calling .GetState
	I0917 00:49:50.173587  173942 status.go:371] multinode-989933 host status = "Running" (err=<nil>)
	I0917 00:49:50.173607  173942 host.go:66] Checking if "multinode-989933" exists ...
	I0917 00:49:50.173926  173942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:49:50.173975  173942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:49:50.188603  173942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41595
	I0917 00:49:50.189188  173942 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:49:50.189832  173942 main.go:141] libmachine: Using API Version  1
	I0917 00:49:50.189942  173942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:49:50.190414  173942 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:49:50.190700  173942 main.go:141] libmachine: (multinode-989933) Calling .GetIP
	I0917 00:49:50.193996  173942 main.go:141] libmachine: (multinode-989933) DBG | domain multinode-989933 has defined MAC address 52:54:00:f0:00:fb in network mk-multinode-989933
	I0917 00:49:50.194519  173942 main.go:141] libmachine: (multinode-989933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:00:fb", ip: ""} in network mk-multinode-989933: {Iface:virbr2 ExpiryTime:2025-09-17 01:46:54 +0000 UTC Type:0 Mac:52:54:00:f0:00:fb Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:multinode-989933 Clientid:01:52:54:00:f0:00:fb}
	I0917 00:49:50.194570  173942 main.go:141] libmachine: (multinode-989933) DBG | domain multinode-989933 has defined IP address 192.168.50.135 and MAC address 52:54:00:f0:00:fb in network mk-multinode-989933
	I0917 00:49:50.194622  173942 host.go:66] Checking if "multinode-989933" exists ...
	I0917 00:49:50.194985  173942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:49:50.195034  173942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:49:50.209325  173942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42567
	I0917 00:49:50.209844  173942 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:49:50.210338  173942 main.go:141] libmachine: Using API Version  1
	I0917 00:49:50.210362  173942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:49:50.210714  173942 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:49:50.210920  173942 main.go:141] libmachine: (multinode-989933) Calling .DriverName
	I0917 00:49:50.211091  173942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:49:50.211132  173942 main.go:141] libmachine: (multinode-989933) Calling .GetSSHHostname
	I0917 00:49:50.214055  173942 main.go:141] libmachine: (multinode-989933) DBG | domain multinode-989933 has defined MAC address 52:54:00:f0:00:fb in network mk-multinode-989933
	I0917 00:49:50.214501  173942 main.go:141] libmachine: (multinode-989933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:00:fb", ip: ""} in network mk-multinode-989933: {Iface:virbr2 ExpiryTime:2025-09-17 01:46:54 +0000 UTC Type:0 Mac:52:54:00:f0:00:fb Iaid: IPaddr:192.168.50.135 Prefix:24 Hostname:multinode-989933 Clientid:01:52:54:00:f0:00:fb}
	I0917 00:49:50.214529  173942 main.go:141] libmachine: (multinode-989933) DBG | domain multinode-989933 has defined IP address 192.168.50.135 and MAC address 52:54:00:f0:00:fb in network mk-multinode-989933
	I0917 00:49:50.214680  173942 main.go:141] libmachine: (multinode-989933) Calling .GetSSHPort
	I0917 00:49:50.214882  173942 main.go:141] libmachine: (multinode-989933) Calling .GetSSHKeyPath
	I0917 00:49:50.215044  173942 main.go:141] libmachine: (multinode-989933) Calling .GetSSHUsername
	I0917 00:49:50.215195  173942 sshutil.go:53] new ssh client: &{IP:192.168.50.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/multinode-989933/id_rsa Username:docker}
	I0917 00:49:50.300913  173942 ssh_runner.go:195] Run: systemctl --version
	I0917 00:49:50.307428  173942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:49:50.324747  173942 kubeconfig.go:125] found "multinode-989933" server: "https://192.168.50.135:8443"
	I0917 00:49:50.324795  173942 api_server.go:166] Checking apiserver status ...
	I0917 00:49:50.324880  173942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:49:50.346732  173942 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W0917 00:49:50.360633  173942 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0917 00:49:50.360708  173942 ssh_runner.go:195] Run: ls
	I0917 00:49:50.367057  173942 api_server.go:253] Checking apiserver healthz at https://192.168.50.135:8443/healthz ...
	I0917 00:49:50.372523  173942 api_server.go:279] https://192.168.50.135:8443/healthz returned 200:
	ok
	I0917 00:49:50.372553  173942 status.go:463] multinode-989933 apiserver status = Running (err=<nil>)
	I0917 00:49:50.372568  173942 status.go:176] multinode-989933 status: &{Name:multinode-989933 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:49:50.372603  173942 status.go:174] checking status of multinode-989933-m02 ...
	I0917 00:49:50.372944  173942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:49:50.372986  173942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:49:50.387913  173942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37611
	I0917 00:49:50.388474  173942 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:49:50.389176  173942 main.go:141] libmachine: Using API Version  1
	I0917 00:49:50.389220  173942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:49:50.389635  173942 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:49:50.390081  173942 main.go:141] libmachine: (multinode-989933-m02) Calling .GetState
	I0917 00:49:50.392456  173942 status.go:371] multinode-989933-m02 host status = "Running" (err=<nil>)
	I0917 00:49:50.392478  173942 host.go:66] Checking if "multinode-989933-m02" exists ...
	I0917 00:49:50.392809  173942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:49:50.392875  173942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:49:50.407625  173942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45449
	I0917 00:49:50.408175  173942 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:49:50.408663  173942 main.go:141] libmachine: Using API Version  1
	I0917 00:49:50.408691  173942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:49:50.409116  173942 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:49:50.409345  173942 main.go:141] libmachine: (multinode-989933-m02) Calling .GetIP
	I0917 00:49:50.413053  173942 main.go:141] libmachine: (multinode-989933-m02) DBG | domain multinode-989933-m02 has defined MAC address 52:54:00:d0:1e:db in network mk-multinode-989933
	I0917 00:49:50.413646  173942 main.go:141] libmachine: (multinode-989933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:1e:db", ip: ""} in network mk-multinode-989933: {Iface:virbr2 ExpiryTime:2025-09-17 01:48:18 +0000 UTC Type:0 Mac:52:54:00:d0:1e:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:multinode-989933-m02 Clientid:01:52:54:00:d0:1e:db}
	I0917 00:49:50.413676  173942 main.go:141] libmachine: (multinode-989933-m02) DBG | domain multinode-989933-m02 has defined IP address 192.168.50.205 and MAC address 52:54:00:d0:1e:db in network mk-multinode-989933
	I0917 00:49:50.413934  173942 host.go:66] Checking if "multinode-989933-m02" exists ...
	I0917 00:49:50.414258  173942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:49:50.414337  173942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:49:50.430416  173942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36509
	I0917 00:49:50.430992  173942 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:49:50.431620  173942 main.go:141] libmachine: Using API Version  1
	I0917 00:49:50.431660  173942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:49:50.432075  173942 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:49:50.432368  173942 main.go:141] libmachine: (multinode-989933-m02) Calling .DriverName
	I0917 00:49:50.432666  173942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:49:50.432699  173942 main.go:141] libmachine: (multinode-989933-m02) Calling .GetSSHHostname
	I0917 00:49:50.436564  173942 main.go:141] libmachine: (multinode-989933-m02) DBG | domain multinode-989933-m02 has defined MAC address 52:54:00:d0:1e:db in network mk-multinode-989933
	I0917 00:49:50.437272  173942 main.go:141] libmachine: (multinode-989933-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:1e:db", ip: ""} in network mk-multinode-989933: {Iface:virbr2 ExpiryTime:2025-09-17 01:48:18 +0000 UTC Type:0 Mac:52:54:00:d0:1e:db Iaid: IPaddr:192.168.50.205 Prefix:24 Hostname:multinode-989933-m02 Clientid:01:52:54:00:d0:1e:db}
	I0917 00:49:50.437309  173942 main.go:141] libmachine: (multinode-989933-m02) DBG | domain multinode-989933-m02 has defined IP address 192.168.50.205 and MAC address 52:54:00:d0:1e:db in network mk-multinode-989933
	I0917 00:49:50.437433  173942 main.go:141] libmachine: (multinode-989933-m02) Calling .GetSSHPort
	I0917 00:49:50.437694  173942 main.go:141] libmachine: (multinode-989933-m02) Calling .GetSSHKeyPath
	I0917 00:49:50.437915  173942 main.go:141] libmachine: (multinode-989933-m02) Calling .GetSSHUsername
	I0917 00:49:50.438155  173942 sshutil.go:53] new ssh client: &{IP:192.168.50.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21550-141589/.minikube/machines/multinode-989933-m02/id_rsa Username:docker}
	I0917 00:49:50.526526  173942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:49:50.543684  173942 status.go:176] multinode-989933-m02 status: &{Name:multinode-989933-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:49:50.543724  173942 status.go:174] checking status of multinode-989933-m03 ...
	I0917 00:49:50.544328  173942 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:49:50.544380  173942 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:49:50.559500  173942 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I0917 00:49:50.559999  173942 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:49:50.560466  173942 main.go:141] libmachine: Using API Version  1
	I0917 00:49:50.560493  173942 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:49:50.560868  173942 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:49:50.561078  173942 main.go:141] libmachine: (multinode-989933-m03) Calling .GetState
	I0917 00:49:50.562789  173942 status.go:371] multinode-989933-m03 host status = "Stopped" (err=<nil>)
	I0917 00:49:50.562803  173942 status.go:384] host is not running, skipping remaining checks
	I0917 00:49:50.562809  173942 status.go:176] multinode-989933-m03 status: &{Name:multinode-989933-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.52s)
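
Note that `minikube status` reports a stopped node through its exit code rather than through stderr: in the run above, stopping m03 makes both status invocations exit with code 7 while still printing the per-node summary. A minimal sketch of reading that exit code instead of treating non-zero as fatal; the meaning of 7 here is inferred from this run's output, not from documentation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command as multinode_test.go:254 above.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-989933", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok {
			// Non-zero exit (7 in the run above) while the output is still usable.
			fmt.Println("status exited with code", exitErr.ExitCode())
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Println("status exited with code 0")
	}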

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-989933 node start m03 -v=5 --alsologtostderr: (39.675800866s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.33s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (298.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-989933
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-989933
E0917 00:50:50.331166  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:53.405425  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-989933: (2m51.194397103s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-989933 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-989933 --wait=true -v=5 --alsologtostderr: (2m7.502604563s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-989933
--- PASS: TestMultiNode/serial/RestartKeepsNodes (298.80s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-989933 node delete m03: (2.187430842s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.75s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (174.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 stop
E0917 00:55:50.330827  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:36.481640  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:56:53.407990  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-989933 stop: (2m54.778148632s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-989933 status: exit status 7 (95.891548ms)

                                                
                                                
-- stdout --
	multinode-989933
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-989933-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-989933 status --alsologtostderr: exit status 7 (85.183661ms)

                                                
                                                
-- stdout --
	multinode-989933
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-989933-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 00:58:27.372462  176794 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:58:27.372628  176794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:58:27.372639  176794 out.go:374] Setting ErrFile to fd 2...
	I0917 00:58:27.372645  176794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:58:27.372847  176794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
	I0917 00:58:27.373082  176794 out.go:368] Setting JSON to false
	I0917 00:58:27.373107  176794 mustload.go:65] Loading cluster: multinode-989933
	I0917 00:58:27.373289  176794 notify.go:220] Checking for updates...
	I0917 00:58:27.373482  176794 config.go:182] Loaded profile config "multinode-989933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:58:27.373511  176794 status.go:174] checking status of multinode-989933 ...
	I0917 00:58:27.373998  176794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:58:27.374049  176794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:58:27.388844  176794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41175
	I0917 00:58:27.389318  176794 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:58:27.389986  176794 main.go:141] libmachine: Using API Version  1
	I0917 00:58:27.390014  176794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:58:27.390370  176794 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:58:27.390557  176794 main.go:141] libmachine: (multinode-989933) Calling .GetState
	I0917 00:58:27.392540  176794 status.go:371] multinode-989933 host status = "Stopped" (err=<nil>)
	I0917 00:58:27.392570  176794 status.go:384] host is not running, skipping remaining checks
	I0917 00:58:27.392580  176794 status.go:176] multinode-989933 status: &{Name:multinode-989933 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:58:27.392606  176794 status.go:174] checking status of multinode-989933-m02 ...
	I0917 00:58:27.392948  176794 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0917 00:58:27.393017  176794 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0917 00:58:27.406534  176794 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37601
	I0917 00:58:27.407042  176794 main.go:141] libmachine: () Calling .GetVersion
	I0917 00:58:27.407484  176794 main.go:141] libmachine: Using API Version  1
	I0917 00:58:27.407517  176794 main.go:141] libmachine: () Calling .SetConfigRaw
	I0917 00:58:27.407920  176794 main.go:141] libmachine: () Calling .GetMachineName
	I0917 00:58:27.408128  176794 main.go:141] libmachine: (multinode-989933-m02) Calling .GetState
	I0917 00:58:27.409989  176794 status.go:371] multinode-989933-m02 host status = "Stopped" (err=<nil>)
	I0917 00:58:27.410004  176794 status.go:384] host is not running, skipping remaining checks
	I0917 00:58:27.410010  176794 status.go:176] multinode-989933-m02 status: &{Name:multinode-989933-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (174.96s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (92.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-989933 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0917 00:58:53.399999  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-989933 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m32.06556674s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-989933 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (92.63s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-989933
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-989933-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-989933-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (68.56795ms)

                                                
                                                
-- stdout --
	* [multinode-989933-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-989933-m02' is duplicated with machine name 'multinode-989933-m02' in profile 'multinode-989933'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-989933-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-989933-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.965712803s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-989933
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-989933: exit status 80 (233.44145ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-989933 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-989933-m03 already exists in multinode-989933-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-989933-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.19s)

                                                
                                    
TestScheduledStopUnix (111.2s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-373055 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-373055 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.39469734s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-373055 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-373055 -n scheduled-stop-373055
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-373055 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0917 01:04:08.186687  145530 retry.go:31] will retry after 113.419µs: open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid: no such file or directory
I0917 01:04:08.187924  145530 retry.go:31] will retry after 147.971µs: open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid: no such file or directory
I0917 01:04:08.189059  145530 retry.go:31] will retry after 136.588µs: open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid: no such file or directory
I0917 01:04:08.190229  145530 retry.go:31] will retry after 235.327µs: open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid: no such file or directory
I0917 01:04:08.191384  145530 retry.go:31] will retry after 663.802µs: open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid: no such file or directory
I0917 01:04:08.192524  145530 retry.go:31] will retry after 1.067829ms: open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid: no such file or directory
I0917 01:04:08.193694  145530 retry.go:31] will retry after 989.697µs: open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid: no such file or directory
I0917 01:04:08.194889  145530 retry.go:31] will retry after 2.202851ms: open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid: no such file or directory
I0917 01:04:08.198141  145530 retry.go:31] will retry after 2.107545ms: open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid: no such file or directory
I0917 01:04:08.201366  145530 retry.go:31] will retry after 3.785655ms: open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid: no such file or directory
I0917 01:04:08.205654  145530 retry.go:31] will retry after 3.961665ms: open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid: no such file or directory
I0917 01:04:08.209930  145530 retry.go:31] will retry after 4.948847ms: open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid: no such file or directory
I0917 01:04:08.215337  145530 retry.go:31] will retry after 7.931347ms: open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid: no such file or directory
I0917 01:04:08.223725  145530 retry.go:31] will retry after 10.78909ms: open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid: no such file or directory
I0917 01:04:08.234966  145530 retry.go:31] will retry after 17.733295ms: open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid: no such file or directory
I0917 01:04:08.253253  145530 retry.go:31] will retry after 52.966521ms: open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-373055 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-373055 -n scheduled-stop-373055
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-373055
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-373055 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-373055
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-373055: exit status 7 (68.364005ms)

                                                
                                                
-- stdout --
	scheduled-stop-373055
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-373055 -n scheduled-stop-373055
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-373055 -n scheduled-stop-373055: exit status 7 (66.339614ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-373055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-373055
--- PASS: TestScheduledStopUnix (111.20s)
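
The retry.go lines above show the harness polling for the scheduled-stop pid file with a growing delay until it appears. A small sketch of that poll-with-backoff pattern follows; the path is taken from this run, while the backoff constants are illustrative rather than the harness's actual values.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForFile re-checks the path with a doubling delay, as the
	// "will retry after ..." lines above do, until it exists or the budget runs out.
	func waitForFile(path string, maxWait time.Duration) error {
		delay := 100 * time.Microsecond // illustrative starting point
		deadline := time.Now().Add(maxWait)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			} else if !os.IsNotExist(err) {
				return err
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(delay)
			delay *= 2
		}
	}

	func main() {
		path := "/home/jenkins/minikube-integration/21550-141589/.minikube/profiles/scheduled-stop-373055/pid"
		if err := waitForFile(path, 5*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("pid file present:", path)
	}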

                                                
                                    
TestRunningBinaryUpgrade (128.87s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.885159918 start -p running-upgrade-626955 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0917 01:05:50.331228  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.885159918 start -p running-upgrade-626955 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m43.487574073s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-626955 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-626955 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (23.909235803s)
helpers_test.go:175: Cleaning up "running-upgrade-626955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-626955
--- PASS: TestRunningBinaryUpgrade (128.87s)

                                                
                                    
x
+
TestKubernetesUpgrade (130.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-661366 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-661366 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (44.50974345s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-661366
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-661366: (2.174224169s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-661366 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-661366 status --format={{.Host}}: exit status 7 (101.548827ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-661366 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-661366 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (35.14144217s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-661366 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-661366 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-661366 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (96.992275ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-661366] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-661366
	    minikube start -p kubernetes-upgrade-661366 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6613662 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-661366 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-661366 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-661366 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (47.038257302s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-661366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-661366
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-661366: (1.012346476s)
--- PASS: TestKubernetesUpgrade (130.14s)
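Note: the test above starts at v1.28.0, stops, upgrades to v1.34.0, then confirms that a downgrade back to v1.28.0 is refused (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) before restarting on v1.34.0. A minimal sketch of that sequence, assuming a `minikube` binary on PATH and borrowing the profile name and flags from the log; it is not the test's own code:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run shells out to minikube and streams its output.
func run(args ...string) error {
	cmd := exec.Command("minikube", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	profile := "kubernetes-upgrade-661366"
	steps := [][]string{
		{"start", "-p", profile, "--memory=3072", "--kubernetes-version=v1.28.0", "--driver=kvm2", "--container-runtime=crio"},
		{"stop", "-p", profile},
		{"start", "-p", profile, "--memory=3072", "--kubernetes-version=v1.34.0", "--driver=kvm2", "--container-runtime=crio"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			panic(fmt.Sprintf("step %v failed: %v", s, err))
		}
	}
	// The downgrade attempt must fail; in the run above it exits with 106 and the
	// K8S_DOWNGRADE_UNSUPPORTED suggestion to delete or recreate the cluster instead.
	if err := run("start", "-p", profile, "--memory=3072", "--kubernetes-version=v1.28.0",
		"--driver=kvm2", "--container-runtime=crio"); err == nil {
		panic("downgrade unexpectedly succeeded")
	}
	fmt.Println("downgrade refused, cluster stays on v1.34.0")
}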

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-582079 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-582079 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (82.796348ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-582079] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
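Note: combining --no-kubernetes with --kubernetes-version is rejected up front (exit status 14, MK_USAGE), and the stderr hint points at `minikube config unset kubernetes-version` for the case where the version was set globally. A minimal sketch of checking that rejection, with the binary name, profile, and flags taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "NoKubernetes-582079",
		"--no-kubernetes", "--kubernetes-version=v1.28.0",
		"--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Expected: the usage error is reported before any VM work begins.
		fmt.Printf("rejected with exit %d\n%s", exitErr.ExitCode(), out)
		return
	}
	fmt.Println("unexpected: the conflicting flags were accepted")
}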

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (84.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-582079 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-582079 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m23.910989488s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-582079 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (84.24s)

                                                
                                    
x
+
TestPause/serial/Start (118.3s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-003341 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-003341 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m58.297661534s)
--- PASS: TestPause/serial/Start (118.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (30.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-582079 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0917 01:06:53.405892  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-582079 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (29.168960511s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-582079 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-582079 status -o json: exit status 2 (289.434323ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-582079","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-582079
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-582079: (1.283846326s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.74s)
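Note: for a --no-kubernetes profile, `minikube status -o json` exits non-zero (2 above) but still prints the JSON status, showing the host Running while the kubelet and API server are Stopped. A minimal sketch of reading that output, assuming the JSON shape shown in the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors the fields printed by `minikube status -o json` above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	out, err := exec.Command("minikube", "-p", "NoKubernetes-582079", "status", "-o", "json").Output()
	if err != nil {
		// status exits non-zero when components are stopped; the JSON is still on stdout.
		if _, ok := err.(*exec.ExitError); !ok {
			panic(err)
		}
	}
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}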

                                                
                                    
x
+
TestNoKubernetes/serial/Start (41.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-582079 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-582079 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.965273066s)
--- PASS: TestNoKubernetes/serial/Start (41.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-582079 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-582079 "sudo systemctl is-active --quiet service kubelet": exit status 1 (227.963221ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
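Note: the "kubelet must not be running" check works by running systemctl inside the VM over `minikube ssh`; a non-zero exit from `systemctl is-active --quiet` (status 4 in the run above) means the unit is not active. A minimal sketch of the same check, with the profile name taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive returns true only when systemctl reports the kubelet unit as active.
func kubeletActive(profile string) bool {
	cmd := exec.Command("minikube", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	return cmd.Run() == nil // exit 0 only when the unit is active
}

func main() {
	if kubeletActive("NoKubernetes-582079") {
		fmt.Println("unexpected: kubelet is running")
	} else {
		fmt.Println("kubelet is not running, as expected for --no-kubernetes")
	}
}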

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.63s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-582079
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-582079: (1.447026365s)
--- PASS: TestNoKubernetes/serial/Stop (1.45s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (36.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-582079 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-582079 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (36.112355669s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (36.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-582079 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-582079 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.808779ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-733841 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-733841 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (446.70111ms)

                                                
                                                
-- stdout --
	* [false-733841] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 01:08:40.567288  184345 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:08:40.567419  184345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:08:40.567427  184345 out.go:374] Setting ErrFile to fd 2...
	I0917 01:08:40.567433  184345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:08:40.567738  184345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-141589/.minikube/bin
	I0917 01:08:40.568432  184345 out.go:368] Setting JSON to false
	I0917 01:08:40.569591  184345 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":13865,"bootTime":1758057456,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0917 01:08:40.569695  184345 start.go:140] virtualization: kvm guest
	I0917 01:08:40.572051  184345 out.go:179] * [false-733841] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0917 01:08:40.573567  184345 notify.go:220] Checking for updates...
	I0917 01:08:40.573576  184345 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 01:08:40.575257  184345 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:08:40.576881  184345 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-141589/kubeconfig
	I0917 01:08:40.578183  184345 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-141589/.minikube
	I0917 01:08:40.579501  184345 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0917 01:08:40.581037  184345 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:08:40.583407  184345 config.go:182] Loaded profile config "cert-expiration-867223": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:08:40.583572  184345 config.go:182] Loaded profile config "force-systemd-flag-487816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:08:40.583701  184345 config.go:182] Loaded profile config "pause-003341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:08:40.583814  184345 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 01:08:40.946370  184345 out.go:179] * Using the kvm2 driver based on user configuration
	I0917 01:08:40.948301  184345 start.go:304] selected driver: kvm2
	I0917 01:08:40.948323  184345 start.go:918] validating driver "kvm2" against <nil>
	I0917 01:08:40.948340  184345 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:08:40.950702  184345 out.go:203] 
	W0917 01:08:40.951969  184345 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0917 01:08:40.953238  184345 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-733841 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-733841

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-733841

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-733841

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-733841

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-733841

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-733841

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-733841

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-733841

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-733841

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-733841

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-733841

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-733841" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-733841" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:07:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.175:8443
  name: cert-expiration-867223
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:07:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.83.157:8443
  name: pause-003341
contexts:
- context:
    cluster: cert-expiration-867223
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:07:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-867223
  name: cert-expiration-867223
- context:
    cluster: pause-003341
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:07:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-003341
  name: pause-003341
current-context: ""
kind: Config
users:
- name: cert-expiration-867223
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/cert-expiration-867223/client.crt
    client-key: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/cert-expiration-867223/client.key
- name: pause-003341
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/pause-003341/client.crt
    client-key: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/pause-003341/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-733841

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-733841"

                                                
                                                
----------------------- debugLogs end: false-733841 [took: 3.395728145s] --------------------------------
helpers_test.go:175: Cleaning up "false-733841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-733841
--- PASS: TestNetworkPlugins/group/false (4.01s)
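Note: every kubectl call in the debugLogs section above fails with "context was not found for specified context: false-733841" because the profile was never created (start was rejected with "The crio container runtime requires CNI"), so the kubeconfig dump only contains cert-expiration-867223 and pause-003341 and its current-context is empty. A minimal sketch of inspecting that state with client-go's clientcmd loader, assuming k8s.io/client-go is available; the kubeconfig path is the KUBECONFIG value printed earlier in this test's output:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21550-141589/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext) // empty string in the dump above
	for name, ctx := range cfg.Contexts {
		fmt.Printf("context %s -> cluster %s (namespace %s)\n", name, ctx.Cluster, ctx.Namespace)
	}
	// No context named "false-733841" exists, which is why the debugLogs kubectl
	// calls report "context was not found".
	if _, ok := cfg.Contexts["false-733841"]; !ok {
		fmt.Println("no context for false-733841")
	}
}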

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (121.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2857337739 start -p stopped-upgrade-369624 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2857337739 start -p stopped-upgrade-369624 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m2.536030637s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2857337739 -p stopped-upgrade-369624 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2857337739 -p stopped-upgrade-369624 stop: (2.159716609s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-369624 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-369624 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (56.312575194s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (121.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (61.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-005099 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-005099 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m1.474701227s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.47s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-369624
E0917 01:10:50.331014  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-369624: (1.308378858s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (113.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-351017 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-351017 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m53.292879901s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (113.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (113.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-641452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-641452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m53.961400841s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (113.96s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-005099 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [459fb0fd-4aaf-4b9a-a134-d4852ca915ff] Pending
helpers_test.go:352: "busybox" [459fb0fd-4aaf-4b9a-a134-d4852ca915ff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [459fb0fd-4aaf-4b9a-a134-d4852ca915ff] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.008465135s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-005099 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.34s)
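Note: the DeployApp steps above follow one pattern: create testdata/busybox.yaml, wait for the pod matching integration-test=busybox to be Running, then exec `ulimit -n` in it. A minimal sketch of that pattern (not the test's own helper); the context name and label come from the log, while the polling interval and helper name are arbitrary choices for illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// kubectl runs a kubectl command against the given context and returns trimmed output.
func kubectl(ctx string, args ...string) (string, error) {
	full := append([]string{"--context", ctx}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	ctx := "old-k8s-version-005099"
	if _, err := kubectl(ctx, "create", "-f", "testdata/busybox.yaml"); err != nil {
		panic(err)
	}
	deadline := time.Now().Add(8 * time.Minute) // the test waits up to 8m0s
	for {
		phase, _ := kubectl(ctx, "get", "pods", "-l", "integration-test=busybox",
			"-o", "jsonpath={.items[0].status.phase}")
		if phase == "Running" {
			break
		}
		if time.Now().After(deadline) {
			panic("busybox pod never became Running")
		}
		time.Sleep(5 * time.Second)
	}
	out, err := kubectl(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	if err != nil {
		panic(err)
	}
	fmt.Println("open-file limit in the pod:", out)
}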

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-005099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-005099 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.212744925s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-005099 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (84.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-005099 --alsologtostderr -v=3
E0917 01:11:53.406180  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-005099 --alsologtostderr -v=3: (1m24.325939251s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (84.33s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-221341 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-221341 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m25.997963349s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-351017 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d9d140b4-8839-497e-b13c-7eaab5c91e53] Pending
helpers_test.go:352: "busybox" [d9d140b4-8839-497e-b13c-7eaab5c91e53] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d9d140b4-8839-497e-b13c-7eaab5c91e53] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005041395s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-351017 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-641452 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [65ae5ad7-45c7-4f91-9c3e-e4f6415b0122] Pending
helpers_test.go:352: "busybox" [65ae5ad7-45c7-4f91-9c3e-e4f6415b0122] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [65ae5ad7-45c7-4f91-9c3e-e4f6415b0122] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.006354496s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-641452 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-351017 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-351017 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.629335757s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-351017 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.72s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (79.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-351017 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-351017 --alsologtostderr -v=3: (1m19.883340182s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (79.88s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-641452 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-641452 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (77.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-641452 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-641452 --alsologtostderr -v=3: (1m17.861328374s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (77.86s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005099 -n old-k8s-version-005099
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005099 -n old-k8s-version-005099: exit status 7 (80.81216ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-005099 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (43.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-005099 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
E0917 01:13:16.483065  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/addons-772113/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-005099 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (43.449317089s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-005099 -n old-k8s-version-005099
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.77s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-221341 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a1fa5af5-ca05-4d3d-ba4d-4325940073e1] Pending
helpers_test.go:352: "busybox" [a1fa5af5-ca05-4d3d-ba4d-4325940073e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a1fa5af5-ca05-4d3d-ba4d-4325940073e1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004777435s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-221341 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-t44vm" [30efb598-1b63-407d-bebb-0b3ab0497111] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-t44vm" [30efb598-1b63-407d-bebb-0b3ab0497111] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.004319764s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (17.01s)
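The UserAppExistsAfterStop and AddonExistsAfterStop checks all reduce to the same wait: poll the target namespace until a pod matching the label selector reports Running (helpers_test.go:352 prints each observed phase along the way). A hedged client-go sketch of such a poll follows, assuming a kubeconfig path; the namespace and selector come from the log above, but the function is an illustration, not the minikube helper.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForRunningPods polls until at least one pod matching selector in ns
	// is Running, or the timeout expires. Sketch only; the real wait in
	// helpers_test.go differs.
	func waitForRunningPods(kubeconfig, ns, selector string, timeout time.Duration) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return err
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("no Running pod matching %q in %q within %s", selector, ns, timeout)
	}

	func main() {
		// Namespace, selector, and timeout mirror the log above; the kubeconfig path is a placeholder assumption.
		err := waitForRunningPods("/home/jenkins/.kube/config", "kubernetes-dashboard",
			"k8s-app=kubernetes-dashboard", 9*time.Minute)
		fmt.Println(err)
	}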

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-221341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-221341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.096118436s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-221341 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (84.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-221341 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-221341 --alsologtostderr -v=3: (1m24.273457788s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (84.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-t44vm" [30efb598-1b63-407d-bebb-0b3ab0497111] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004691333s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-005099 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-641452 -n embed-certs-641452
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-641452 -n embed-certs-641452: exit status 7 (77.877553ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-641452 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (46.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-641452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-641452 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (46.013528718s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-641452 -n embed-certs-641452
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (46.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-351017 -n no-preload-351017
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-351017 -n no-preload-351017: exit status 7 (74.874715ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-351017 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (76.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-351017 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-351017 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m16.291553799s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-351017 -n no-preload-351017
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (76.61s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-005099 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
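VerifyKubernetesImages asks the profile for its image inventory with `image list --format=json` and flags entries outside minikube's expected set, such as the kindnetd and busybox images noted above. The sketch below invokes the same command and decodes the JSON; because this report does not show the output schema, it decodes generically rather than assuming field names.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same command the test runs; the profile name is taken from the log above.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "old-k8s-version-005099",
			"image", "list", "--format=json").Output()
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		// Decode without assuming a schema (the JSON shape is not shown in this report).
		var images interface{}
		if err := json.Unmarshal(out, &images); err != nil {
			fmt.Println("could not parse image list:", err)
			return
		}
		fmt.Printf("%v\n", images)
	}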

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-005099 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-005099 -n old-k8s-version-005099
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-005099 -n old-k8s-version-005099: exit status 2 (265.766928ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-005099 -n old-k8s-version-005099
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-005099 -n old-k8s-version-005099: exit status 2 (258.692608ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-005099 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-005099 -n old-k8s-version-005099
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-005099 -n old-k8s-version-005099
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.97s)
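Each Pause subtest drives the same sequence: pause the profile, confirm the apiserver reports "Paused" and the kubelet reports "Stopped" (exit status 2 from `status` is expected while paused), then unpause and query both again. A compact sketch of that sequence follows; the command lines are copied from the log, while the small wrapper is an assumption added for illustration.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a minikube command and returns its combined output;
	// non-zero exits are reported but not fatal, matching the "may be ok"
	// handling in the log above.
	func run(args ...string) string {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("non-zero exit (may be ok): %v\n", err)
		}
		return string(out)
	}

	func main() {
		profile := "old-k8s-version-005099" // from the log above
		run("pause", "-p", profile, "--alsologtostderr", "-v=1")
		fmt.Println("apiserver:", run("status", "--format={{.APIServer}}", "-p", profile, "-n", profile)) // expect "Paused"
		fmt.Println("kubelet:", run("status", "--format={{.Kubelet}}", "-p", profile, "-n", profile))     // expect "Stopped"
		run("unpause", "-p", profile, "--alsologtostderr", "-v=1")
		fmt.Println("apiserver:", run("status", "--format={{.APIServer}}", "-p", profile, "-n", profile))
		fmt.Println("kubelet:", run("status", "--format={{.Kubelet}}", "-p", profile, "-n", profile))
	}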

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (81.40s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-329750 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-329750 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m21.403025386s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (81.40s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tzkqq" [135d7a36-4926-433a-8212-af1d42535ce3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005549276s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tzkqq" [135d7a36-4926-433a-8212-af1d42535ce3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005099321s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-641452 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-641452 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-641452 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-641452 -n embed-certs-641452
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-641452 -n embed-certs-641452: exit status 2 (313.149929ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-641452 -n embed-certs-641452
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-641452 -n embed-certs-641452: exit status 2 (280.1647ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-641452 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-641452 --alsologtostderr -v=1: (1.145692327s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-641452 -n embed-certs-641452
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-641452 -n embed-certs-641452
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (90.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-733841 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-733841 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m30.088375892s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-221341 -n default-k8s-diff-port-221341
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-221341 -n default-k8s-diff-port-221341: exit status 7 (79.991954ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-221341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (62.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-221341 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-221341 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m1.652711671s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-221341 -n default-k8s-diff-port-221341
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (62.05s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (18.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-h8zsn" [60600bf6-3310-4a84-ad74-1c3e78acdc5e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0917 01:15:33.402273  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-h8zsn" [60600bf6-3310-4a84-ad74-1c3e78acdc5e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.009742336s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (18.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.60s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-329750 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-329750 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.60271556s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.60s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-329750 --alsologtostderr -v=3
E0917 01:15:50.331324  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/functional-456067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-329750 --alsologtostderr -v=3: (10.633690341s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.63s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-h8zsn" [60600bf6-3310-4a84-ad74-1c3e78acdc5e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003885382s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-351017 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-351017 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-329750 -n newest-cni-329750
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-329750 -n newest-cni-329750: exit status 7 (87.515746ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-329750 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-351017 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-351017 --alsologtostderr -v=1: (1.100624512s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-351017 -n no-preload-351017
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-351017 -n no-preload-351017: exit status 2 (314.619302ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-351017 -n no-preload-351017
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-351017 -n no-preload-351017: exit status 2 (331.460132ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-351017 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-351017 --alsologtostderr -v=1: (1.042618759s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-351017 -n no-preload-351017
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-351017 -n no-preload-351017
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.62s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (40.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-329750 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-329750 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (39.656002321s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-329750 -n newest-cni-329750
E0917 01:16:37.511460  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/old-k8s-version-005099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:16:37.517979  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/old-k8s-version-005099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:16:37.529483  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/old-k8s-version-005099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:16:37.551116  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/old-k8s-version-005099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:16:37.592665  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/old-k8s-version-005099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (79.70s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-733841 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-733841 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.702594723s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.70s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bm9cr" [46894dfe-40c1-4da3-8f69-4c50fa8a0448] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bm9cr" [46894dfe-40c1-4da3-8f69-4c50fa8a0448] Running
E0917 01:16:37.837061  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/old-k8s-version-005099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005709915s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-329750 image list --format=json
E0917 01:16:37.674836  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/old-k8s-version-005099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-329750 --alsologtostderr -v=1
E0917 01:16:38.159350  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/old-k8s-version-005099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:16:38.801390  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/old-k8s-version-005099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-329750 --alsologtostderr -v=1: (1.016189759s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-329750 -n newest-cni-329750
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-329750 -n newest-cni-329750: exit status 2 (283.232487ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-329750 -n newest-cni-329750
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-329750 -n newest-cni-329750: exit status 2 (287.495294ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-329750 --alsologtostderr -v=1
E0917 01:16:40.083680  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/old-k8s-version-005099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-329750 -n newest-cni-329750
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-329750 -n newest-cni-329750
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (94.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-733841 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0917 01:16:42.645556  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/old-k8s-version-005099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-733841 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m34.637048834s)
--- PASS: TestNetworkPlugins/group/calico/Start (94.64s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bm9cr" [46894dfe-40c1-4da3-8f69-4c50fa8a0448] Running
E0917 01:16:47.767812  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/old-k8s-version-005099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004824518s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-221341 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-221341 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-733841 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-221341 --alsologtostderr -v=1
I0917 01:16:49.212363  145530 config.go:182] Loaded profile config "auto-733841": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-221341 --alsologtostderr -v=1: (1.19367193s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-221341 -n default-k8s-diff-port-221341
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-221341 -n default-k8s-diff-port-221341: exit status 2 (330.093503ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-221341 -n default-k8s-diff-port-221341
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-221341 -n default-k8s-diff-port-221341: exit status 2 (338.800652ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-221341 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-221341 --alsologtostderr -v=1: (1.118088318s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-221341 -n default-k8s-diff-port-221341
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-221341 -n default-k8s-diff-port-221341
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.84s)
E0917 01:18:58.003336  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/default-k8s-diff-port-221341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-733841 replace --force -f testdata/netcat-deployment.yaml
I0917 01:16:49.788129  145530 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-29ktk" [bbbbecde-2d06-4905-b6d6-1a8c9cec5867] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-29ktk" [bbbbecde-2d06-4905-b6d6-1a8c9cec5867] Running
E0917 01:16:58.010112  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/old-k8s-version-005099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003962471s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (92.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-733841 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-733841 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m32.628609008s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (92.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-733841 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-733841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-733841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
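The auto plugin's DNS, Localhost, and HairPin checks (and the identical kindnet checks further down) all run from inside the netcat deployment: an nslookup of kubernetes.default, a TCP connect to localhost:8080, and a hairpin connect back to the netcat service. The sketch below issues the same probes through kubectl exec; the context name and commands mirror the log, and the probe helper itself is illustrative.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probe runs a shell command inside the netcat deployment via kubectl exec,
	// mirroring the DNS / Localhost / HairPin checks logged above. Sketch only.
	func probe(kubeContext, command string) error {
		cmd := exec.Command("kubectl", "--context", kubeContext,
			"exec", "deployment/netcat", "--", "/bin/sh", "-c", command)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s\n%s", command, out)
		return err
	}

	func main() {
		ctx := "auto-733841" // profile/context name from the log above
		probe(ctx, "nslookup kubernetes.default")    // DNS
		probe(ctx, "nc -w 5 -i 5 -z localhost 8080") // Localhost
		probe(ctx, "nc -w 5 -i 5 -z netcat 8080")    // HairPin
	}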

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (101.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-733841 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-733841 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m41.47085371s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (101.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-7p746" [0326c289-ea36-4381-9e5a-2f6e3eefd401] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005179845s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-733841 "pgrep -a kubelet"
I0917 01:17:28.472700  145530 config.go:182] Loaded profile config "kindnet-733841": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-733841 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jcfq9" [6b67333b-d661-4c4a-ab7e-fd45af754eef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jcfq9" [6b67333b-d661-4c4a-ab7e-fd45af754eef] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.007066053s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-733841 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-733841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-733841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (73.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-733841 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0917 01:18:05.949566  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/no-preload-351017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-733841 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m13.47848205s)
--- PASS: TestNetworkPlugins/group/flannel/Start (73.48s)
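Note: after a start like the one above, the CNI configuration installed by the run can be inspected from the node. A sketch, assuming the flannel-733841 profile is still running (/etc/cni/net.d is the standard CNI config directory, also dumped in the debug logs below):
    out/minikube-linux-amd64 ssh -p flannel-733841 "ls /etc/cni/net.d"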

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-k7cl6" [f816ffc9-7e24-4d17-8239-8483df8fa85b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005059681s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
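Note: the wait above keys on the k8s-app=calico-node label; the same selector can be checked manually. A sketch, assuming the calico-733841 context still exists:
    kubectl --context calico-733841 get pods -n kube-system -l k8s-app=calico-node -o wide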

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-733841 "pgrep -a kubelet"
I0917 01:18:23.376743  145530 config.go:182] Loaded profile config "calico-733841": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-733841 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-65sg6" [7e2c0c24-fd3d-4c6c-8a48-a26701cb5361] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 01:18:26.431757  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/no-preload-351017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-65sg6" [7e2c0c24-fd3d-4c6c-8a48-a26701cb5361] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004827396s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-733841 "pgrep -a kubelet"
I0917 01:18:27.407354  145530 config.go:182] Loaded profile config "custom-flannel-733841": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-733841 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-drf92" [5c3e4589-129f-4316-b29f-c3257e40b58f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-drf92" [5c3e4589-129f-4316-b29f-c3257e40b58f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004776569s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-733841 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-733841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-733841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-733841 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-733841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-733841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (82.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-733841 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0917 01:18:54.160081  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/default-k8s-diff-port-221341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:18:55.441434  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/default-k8s-diff-port-221341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-733841 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m22.712831088s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-733841 "pgrep -a kubelet"
I0917 01:19:01.210687  145530 config.go:182] Loaded profile config "enable-default-cni-733841": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-733841 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l97d6" [8d20e32d-f54c-4bc0-86c8-3d2f88d166ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 01:19:03.124972  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/default-k8s-diff-port-221341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-l97d6" [8d20e32d-f54c-4bc0-86c8-3d2f88d166ea] Running
E0917 01:19:07.393409  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/no-preload-351017/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005263206s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-733841 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-733841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-733841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-ndq96" [727d92be-3633-4463-9376-2a851b6e65ff] Running
E0917 01:19:13.367351  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/default-k8s-diff-port-221341/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00473245s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-733841 "pgrep -a kubelet"
I0917 01:19:19.552056  145530 config.go:182] Loaded profile config "flannel-733841": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-733841 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7d4dj" [069f7479-3012-437b-807e-a48576aeccc4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 01:19:21.376873  145530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/old-k8s-version-005099/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-7d4dj" [069f7479-3012-437b-807e-a48576aeccc4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003575376s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-733841 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-733841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-733841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-733841 "pgrep -a kubelet"
I0917 01:20:16.923906  145530 config.go:182] Loaded profile config "bridge-733841": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-733841 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bh7c5" [c5469435-82ac-46b3-926f-efe7155f5e0f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bh7c5" [c5469435-82ac-46b3-926f-efe7155f5e0f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004956588s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-733841 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-733841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-733841 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (40/324)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.0/cached-images 0
15 TestDownloadOnly/v1.34.0/binaries 0
16 TestDownloadOnly/v1.34.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.35
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
135 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
262 TestStartStop/group/disable-driver-mounts 0.15
276 TestNetworkPlugins/group/kubenet 3.41
284 TestNetworkPlugins/group/cilium 3.58
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.35s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-772113 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.35s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-432780" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-432780
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-733841 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-733841

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-733841

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-733841

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-733841

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-733841

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-733841

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-733841

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-733841

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-733841

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-733841

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-733841

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-733841" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-733841" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:07:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.175:8443
  name: cert-expiration-867223
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:08:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.4:8443
  name: force-systemd-flag-487816
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:07:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.83.157:8443
  name: pause-003341
contexts:
- context:
    cluster: cert-expiration-867223
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:07:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-867223
  name: cert-expiration-867223
- context:
    cluster: force-systemd-flag-487816
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:08:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: force-systemd-flag-487816
  name: force-systemd-flag-487816
- context:
    cluster: pause-003341
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:07:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-003341
  name: pause-003341
current-context: force-systemd-flag-487816
kind: Config
users:
- name: cert-expiration-867223
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/cert-expiration-867223/client.crt
    client-key: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/cert-expiration-867223/client.key
- name: force-systemd-flag-487816
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/force-systemd-flag-487816/client.crt
    client-key: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/force-systemd-flag-487816/client.key
- name: pause-003341
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/pause-003341/client.crt
    client-key: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/pause-003341/client.key
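Note: the config above lists three remaining profiles, with force-systemd-flag-487816 as the current context. Any of them can be selected with the standard kubectl context switch, for example:
    kubectl config use-context pause-003341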

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-733841

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-733841"

                                                
                                                
----------------------- debugLogs end: kubenet-733841 [took: 3.232675117s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-733841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-733841
--- SKIP: TestNetworkPlugins/group/kubenet (3.41s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-733841 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-733841

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-733841

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-733841

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-733841

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-733841

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-733841

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-733841

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-733841

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-733841

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-733841

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-733841

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-733841" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-733841

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-733841

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-733841

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-733841

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-733841" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-733841" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:07:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.175:8443
  name: cert-expiration-867223
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-141589/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:07:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.83.157:8443
  name: pause-003341
contexts:
- context:
    cluster: cert-expiration-867223
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:07:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-867223
  name: cert-expiration-867223
- context:
    cluster: pause-003341
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:07:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-003341
  name: pause-003341
current-context: ""
kind: Config
users:
- name: cert-expiration-867223
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/cert-expiration-867223/client.crt
    client-key: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/cert-expiration-867223/client.key
- name: pause-003341
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/pause-003341/client.crt
    client-key: /home/jenkins/minikube-integration/21550-141589/.minikube/profiles/pause-003341/client.key
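In the config above, current-context is empty and there is no cilium-733841 entry, so any kubectl call that does not pass --context has no cluster to talk to, which matches the errors throughout this dump. A minimal sketch of targeting one of the existing contexts instead (pause-003341 is only an illustrative choice, not what the test would use):

    # run a single command against a named context
    kubectl --context pause-003341 get nodes
    # or make it the default for later calls
    kubectl config use-context pause-003341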

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-733841

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-733841" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-733841"

                                                
                                                
----------------------- debugLogs end: cilium-733841 [took: 3.416410065s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-733841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-733841
--- SKIP: TestNetworkPlugins/group/cilium (3.58s)

                                                
                                    