Test Report: KVM_Linux_crio 21594

532dacb4acf31553658ff6b0bf62fcf9309f2277:2025-09-19:41507

Test fail (12/330)

TestAddons/parallel/Ingress (158.78s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-266998 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-266998 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-266998 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [71e323d6-d37d-4ac6-88ea-2a015c817d71] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [71e323d6-d37d-4ac6-88ea-2a015c817d71] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003409367s
I0919 22:17:23.414119   18671 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-266998 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.335980252s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
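
Exit status 28 is curl's "operation timed out" code, so the request from inside the VM never got a response from the ingress controller on 127.0.0.1:80. A minimal sketch for reproducing the check by hand, assuming the addons-266998 profile is still running; the curl is the same command the test runs (with an explicit timeout added), and the selector comes from the wait at addons_test.go:209 above:

  # re-run the test's probe with a short, explicit timeout
  out/minikube-linux-amd64 -p addons-266998 ssh "curl -s -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # check that the ingress-nginx controller pod is actually Running on the node
  kubectl --context addons-266998 -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o wide
  # inspect the controller's recent logs for the nginx.example.com host rule
  kubectl --context addons-266998 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50
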
addons_test.go:288: (dbg) Run:  kubectl --context addons-266998 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.205
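
The nslookup above queries the ingress-dns responder directly at the address reported by minikube ip (192.168.39.205 here), so it exercises name resolution independently of the failed HTTP path; hello-john.test comes from the ingress-dns-example-v1.yaml manifest applied at addons_test.go:288. A minimal sketch for repeating that check by hand, under the same assumption that the profile is still up:

  # confirm the VM address the resolver listens on
  out/minikube-linux-amd64 -p addons-266998 ip
  # ask the ingress-dns responder for the test record
  nslookup hello-john.test 192.168.39.205
  # the ingress-dns pod runs in kube-system; listing the namespace should show it
  kubectl --context addons-266998 -n kube-system get pods

No error is logged for this step, which points the failure at the HTTP request through the ingress controller rather than at DNS.
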
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-266998 -n addons-266998
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-266998 logs -n 25: (1.341789724s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-098176                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-098176 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:14 UTC │
	│ start   │ --download-only -p binary-mirror-536834 --alsologtostderr --binary-mirror http://127.0.0.1:39355 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-536834 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	│ delete  │ -p binary-mirror-536834                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-536834 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:14 UTC │
	│ addons  │ disable dashboard -p addons-266998                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	│ addons  │ enable dashboard -p addons-266998                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	│ start   │ -p addons-266998 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:16 UTC │
	│ addons  │ addons-266998 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:16 UTC │ 19 Sep 25 22:16 UTC │
	│ addons  │ addons-266998 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:16 UTC │ 19 Sep 25 22:16 UTC │
	│ addons  │ enable headlamp -p addons-266998 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:16 UTC │ 19 Sep 25 22:16 UTC │
	│ addons  │ addons-266998 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:16 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-266998 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:16 UTC │ 19 Sep 25 22:17 UTC │
	│ ip      │ addons-266998 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-266998 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-266998 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-266998 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-266998 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-266998                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-266998 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-266998 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ ssh     │ addons-266998 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │                     │
	│ ssh     │ addons-266998 ssh cat /opt/local-path-provisioner/pvc-9797f505-00b1-448b-b622-5acde1f9687f_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-266998 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:18 UTC │
	│ addons  │ addons-266998 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-266998 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ ip      │ addons-266998 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-266998        │ jenkins │ v1.37.0 │ 19 Sep 25 22:19 UTC │ 19 Sep 25 22:19 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:14:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:14:18.505047   19299 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:14:18.505274   19299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:18.505282   19299 out.go:374] Setting ErrFile to fd 2...
	I0919 22:14:18.505286   19299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:18.505480   19299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	I0919 22:14:18.505996   19299 out.go:368] Setting JSON to false
	I0919 22:14:18.506776   19299 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3385,"bootTime":1758316673,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:14:18.506858   19299 start.go:140] virtualization: kvm guest
	I0919 22:14:18.508589   19299 out.go:179] * [addons-266998] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:14:18.509932   19299 notify.go:220] Checking for updates...
	I0919 22:14:18.509947   19299 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:14:18.511065   19299 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:14:18.512523   19299 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 22:14:18.513647   19299 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 22:14:18.514752   19299 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:14:18.515825   19299 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:14:18.517098   19299 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:14:18.546752   19299 out.go:179] * Using the kvm2 driver based on user configuration
	I0919 22:14:18.547883   19299 start.go:304] selected driver: kvm2
	I0919 22:14:18.547898   19299 start.go:918] validating driver "kvm2" against <nil>
	I0919 22:14:18.547920   19299 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:14:18.548565   19299 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:14:18.548641   19299 install.go:123] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21594-14764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 22:14:18.562223   19299 install.go:134] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is {Version:v1.37.0 Commit:6e37ee63f758843bb5fe33c3a528c564c4b83d53}
	I0919 22:14:18.562254   19299 install.go:123] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21594-14764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 22:14:18.576594   19299 install.go:134] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is {Version:v1.37.0 Commit:6e37ee63f758843bb5fe33c3a528c564c4b83d53}
	I0919 22:14:18.576639   19299 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:14:18.576937   19299 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:14:18.576976   19299 cni.go:84] Creating CNI manager for ""
	I0919 22:14:18.577033   19299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 22:14:18.577045   19299 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 22:14:18.577119   19299 start.go:348] cluster config:
	{Name:addons-266998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-266998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:14:18.577247   19299 iso.go:125] acquiring lock: {Name:mk21ede999fca7478b081d3e470ef3cc88b140f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:14:18.579076   19299 out.go:179] * Starting "addons-266998" primary control-plane node in "addons-266998" cluster
	I0919 22:14:18.580383   19299 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:14:18.580418   19299 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:14:18.580428   19299 cache.go:58] Caching tarball of preloaded images
	I0919 22:14:18.580509   19299 preload.go:172] Found /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:14:18.580521   19299 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:14:18.580830   19299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/config.json ...
	I0919 22:14:18.580855   19299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/config.json: {Name:mk239dd20fde67316fe8540f84edd3ee6e8f7abc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:18.581011   19299 start.go:360] acquireMachinesLock for addons-266998: {Name:mke6cd936cf5da66e4fbcd4dcd8a2d3d3cae6c7b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 22:14:18.581058   19299 start.go:364] duration metric: took 34.328µs to acquireMachinesLock for "addons-266998"
	I0919 22:14:18.581074   19299 start.go:93] Provisioning new machine with config: &{Name:addons-266998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-266998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:14:18.581120   19299 start.go:125] createHost starting for "" (driver="kvm2")
	I0919 22:14:18.582611   19299 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0919 22:14:18.582756   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:14:18.582802   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:14:18.595668   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43743
	I0919 22:14:18.596170   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:14:18.596681   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:14:18.596702   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:14:18.597051   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:14:18.597198   19299 main.go:141] libmachine: (addons-266998) Calling .GetMachineName
	I0919 22:14:18.597348   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:14:18.597466   19299 start.go:159] libmachine.API.Create for "addons-266998" (driver="kvm2")
	I0919 22:14:18.597506   19299 client.go:168] LocalClient.Create starting
	I0919 22:14:18.597582   19299 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem
	I0919 22:14:18.807124   19299 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem
	I0919 22:14:19.218580   19299 main.go:141] libmachine: Running pre-create checks...
	I0919 22:14:19.218604   19299 main.go:141] libmachine: (addons-266998) Calling .PreCreateCheck
	I0919 22:14:19.219112   19299 main.go:141] libmachine: (addons-266998) Calling .GetConfigRaw
	I0919 22:14:19.219576   19299 main.go:141] libmachine: Creating machine...
	I0919 22:14:19.219591   19299 main.go:141] libmachine: (addons-266998) Calling .Create
	I0919 22:14:19.219768   19299 main.go:141] libmachine: (addons-266998) creating domain...
	I0919 22:14:19.219781   19299 main.go:141] libmachine: (addons-266998) creating network...
	I0919 22:14:19.221241   19299 main.go:141] libmachine: (addons-266998) DBG | found existing default network
	I0919 22:14:19.221377   19299 main.go:141] libmachine: (addons-266998) DBG | <network>
	I0919 22:14:19.221396   19299 main.go:141] libmachine: (addons-266998) DBG |   <name>default</name>
	I0919 22:14:19.221407   19299 main.go:141] libmachine: (addons-266998) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0919 22:14:19.221420   19299 main.go:141] libmachine: (addons-266998) DBG |   <forward mode='nat'>
	I0919 22:14:19.221430   19299 main.go:141] libmachine: (addons-266998) DBG |     <nat>
	I0919 22:14:19.221443   19299 main.go:141] libmachine: (addons-266998) DBG |       <port start='1024' end='65535'/>
	I0919 22:14:19.221452   19299 main.go:141] libmachine: (addons-266998) DBG |     </nat>
	I0919 22:14:19.221462   19299 main.go:141] libmachine: (addons-266998) DBG |   </forward>
	I0919 22:14:19.221473   19299 main.go:141] libmachine: (addons-266998) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0919 22:14:19.221483   19299 main.go:141] libmachine: (addons-266998) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0919 22:14:19.221508   19299 main.go:141] libmachine: (addons-266998) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0919 22:14:19.221525   19299 main.go:141] libmachine: (addons-266998) DBG |     <dhcp>
	I0919 22:14:19.221539   19299 main.go:141] libmachine: (addons-266998) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0919 22:14:19.221550   19299 main.go:141] libmachine: (addons-266998) DBG |     </dhcp>
	I0919 22:14:19.221561   19299 main.go:141] libmachine: (addons-266998) DBG |   </ip>
	I0919 22:14:19.221579   19299 main.go:141] libmachine: (addons-266998) DBG | </network>
	I0919 22:14:19.221590   19299 main.go:141] libmachine: (addons-266998) DBG | 
	I0919 22:14:19.222159   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:19.221979   19327 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000208dd0}
	I0919 22:14:19.222188   19299 main.go:141] libmachine: (addons-266998) DBG | defining private network:
	I0919 22:14:19.222196   19299 main.go:141] libmachine: (addons-266998) DBG | 
	I0919 22:14:19.222201   19299 main.go:141] libmachine: (addons-266998) DBG | <network>
	I0919 22:14:19.222212   19299 main.go:141] libmachine: (addons-266998) DBG |   <name>mk-addons-266998</name>
	I0919 22:14:19.222234   19299 main.go:141] libmachine: (addons-266998) DBG |   <dns enable='no'/>
	I0919 22:14:19.222248   19299 main.go:141] libmachine: (addons-266998) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0919 22:14:19.222255   19299 main.go:141] libmachine: (addons-266998) DBG |     <dhcp>
	I0919 22:14:19.222262   19299 main.go:141] libmachine: (addons-266998) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0919 22:14:19.222272   19299 main.go:141] libmachine: (addons-266998) DBG |     </dhcp>
	I0919 22:14:19.222280   19299 main.go:141] libmachine: (addons-266998) DBG |   </ip>
	I0919 22:14:19.222284   19299 main.go:141] libmachine: (addons-266998) DBG | </network>
	I0919 22:14:19.222290   19299 main.go:141] libmachine: (addons-266998) DBG | 
	I0919 22:14:19.228408   19299 main.go:141] libmachine: (addons-266998) DBG | creating private network mk-addons-266998 192.168.39.0/24...
	I0919 22:14:19.297975   19299 main.go:141] libmachine: (addons-266998) DBG | private network mk-addons-266998 192.168.39.0/24 created
	I0919 22:14:19.298282   19299 main.go:141] libmachine: (addons-266998) setting up store path in /home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998 ...
	I0919 22:14:19.298301   19299 main.go:141] libmachine: (addons-266998) DBG | <network>
	I0919 22:14:19.298314   19299 main.go:141] libmachine: (addons-266998) building disk image from file:///home/jenkins/minikube-integration/21594-14764/.minikube/cache/iso/amd64/minikube-v1.37.0-amd64.iso
	I0919 22:14:19.298325   19299 main.go:141] libmachine: (addons-266998) DBG |   <name>mk-addons-266998</name>
	I0919 22:14:19.298340   19299 main.go:141] libmachine: (addons-266998) DBG |   <uuid>37624841-e4ec-439c-80e1-b5f2017cb440</uuid>
	I0919 22:14:19.298371   19299 main.go:141] libmachine: (addons-266998) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I0919 22:14:19.298390   19299 main.go:141] libmachine: (addons-266998) DBG |   <mac address='52:54:00:e4:e3:2b'/>
	I0919 22:14:19.298417   19299 main.go:141] libmachine: (addons-266998) Downloading /home/jenkins/minikube-integration/21594-14764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21594-14764/.minikube/cache/iso/amd64/minikube-v1.37.0-amd64.iso...
	I0919 22:14:19.298436   19299 main.go:141] libmachine: (addons-266998) DBG |   <dns enable='no'/>
	I0919 22:14:19.298447   19299 main.go:141] libmachine: (addons-266998) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0919 22:14:19.298460   19299 main.go:141] libmachine: (addons-266998) DBG |     <dhcp>
	I0919 22:14:19.298474   19299 main.go:141] libmachine: (addons-266998) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0919 22:14:19.298485   19299 main.go:141] libmachine: (addons-266998) DBG |     </dhcp>
	I0919 22:14:19.298498   19299 main.go:141] libmachine: (addons-266998) DBG |   </ip>
	I0919 22:14:19.298516   19299 main.go:141] libmachine: (addons-266998) DBG | </network>
	I0919 22:14:19.298532   19299 main.go:141] libmachine: (addons-266998) DBG | 
	I0919 22:14:19.298576   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:19.298248   19327 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 22:14:19.549663   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:19.549545   19327 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa...
	I0919 22:14:19.589352   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:19.589238   19327 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/addons-266998.rawdisk...
	I0919 22:14:19.589373   19299 main.go:141] libmachine: (addons-266998) DBG | Writing magic tar header
	I0919 22:14:19.589389   19299 main.go:141] libmachine: (addons-266998) DBG | Writing SSH key tar header
	I0919 22:14:19.589397   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:19.589365   19327 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998 ...
	I0919 22:14:19.589519   19299 main.go:141] libmachine: (addons-266998) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998
	I0919 22:14:19.589535   19299 main.go:141] libmachine: (addons-266998) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21594-14764/.minikube/machines
	I0919 22:14:19.589543   19299 main.go:141] libmachine: (addons-266998) setting executable bit set on /home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998 (perms=drwx------)
	I0919 22:14:19.589553   19299 main.go:141] libmachine: (addons-266998) setting executable bit set on /home/jenkins/minikube-integration/21594-14764/.minikube/machines (perms=drwxr-xr-x)
	I0919 22:14:19.589568   19299 main.go:141] libmachine: (addons-266998) setting executable bit set on /home/jenkins/minikube-integration/21594-14764/.minikube (perms=drwxr-xr-x)
	I0919 22:14:19.589577   19299 main.go:141] libmachine: (addons-266998) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 22:14:19.589588   19299 main.go:141] libmachine: (addons-266998) setting executable bit set on /home/jenkins/minikube-integration/21594-14764 (perms=drwxrwxr-x)
	I0919 22:14:19.589601   19299 main.go:141] libmachine: (addons-266998) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 22:14:19.589622   19299 main.go:141] libmachine: (addons-266998) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 22:14:19.589635   19299 main.go:141] libmachine: (addons-266998) defining domain...
	I0919 22:14:19.589643   19299 main.go:141] libmachine: (addons-266998) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21594-14764
	I0919 22:14:19.589649   19299 main.go:141] libmachine: (addons-266998) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0919 22:14:19.589655   19299 main.go:141] libmachine: (addons-266998) DBG | checking permissions on dir: /home/jenkins
	I0919 22:14:19.589659   19299 main.go:141] libmachine: (addons-266998) DBG | checking permissions on dir: /home
	I0919 22:14:19.589666   19299 main.go:141] libmachine: (addons-266998) DBG | skipping /home - not owner
	I0919 22:14:19.590841   19299 main.go:141] libmachine: (addons-266998) defining domain using XML: 
	I0919 22:14:19.590859   19299 main.go:141] libmachine: (addons-266998) <domain type='kvm'>
	I0919 22:14:19.590869   19299 main.go:141] libmachine: (addons-266998)   <name>addons-266998</name>
	I0919 22:14:19.590878   19299 main.go:141] libmachine: (addons-266998)   <memory unit='MiB'>4096</memory>
	I0919 22:14:19.590886   19299 main.go:141] libmachine: (addons-266998)   <vcpu>2</vcpu>
	I0919 22:14:19.590892   19299 main.go:141] libmachine: (addons-266998)   <features>
	I0919 22:14:19.590898   19299 main.go:141] libmachine: (addons-266998)     <acpi/>
	I0919 22:14:19.590904   19299 main.go:141] libmachine: (addons-266998)     <apic/>
	I0919 22:14:19.590911   19299 main.go:141] libmachine: (addons-266998)     <pae/>
	I0919 22:14:19.590936   19299 main.go:141] libmachine: (addons-266998)   </features>
	I0919 22:14:19.590950   19299 main.go:141] libmachine: (addons-266998)   <cpu mode='host-passthrough'>
	I0919 22:14:19.590956   19299 main.go:141] libmachine: (addons-266998)   </cpu>
	I0919 22:14:19.590983   19299 main.go:141] libmachine: (addons-266998)   <os>
	I0919 22:14:19.591004   19299 main.go:141] libmachine: (addons-266998)     <type>hvm</type>
	I0919 22:14:19.591020   19299 main.go:141] libmachine: (addons-266998)     <boot dev='cdrom'/>
	I0919 22:14:19.591035   19299 main.go:141] libmachine: (addons-266998)     <boot dev='hd'/>
	I0919 22:14:19.591045   19299 main.go:141] libmachine: (addons-266998)     <bootmenu enable='no'/>
	I0919 22:14:19.591052   19299 main.go:141] libmachine: (addons-266998)   </os>
	I0919 22:14:19.591060   19299 main.go:141] libmachine: (addons-266998)   <devices>
	I0919 22:14:19.591069   19299 main.go:141] libmachine: (addons-266998)     <disk type='file' device='cdrom'>
	I0919 22:14:19.591077   19299 main.go:141] libmachine: (addons-266998)       <source file='/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/boot2docker.iso'/>
	I0919 22:14:19.591084   19299 main.go:141] libmachine: (addons-266998)       <target dev='hdc' bus='scsi'/>
	I0919 22:14:19.591099   19299 main.go:141] libmachine: (addons-266998)       <readonly/>
	I0919 22:14:19.591105   19299 main.go:141] libmachine: (addons-266998)     </disk>
	I0919 22:14:19.591111   19299 main.go:141] libmachine: (addons-266998)     <disk type='file' device='disk'>
	I0919 22:14:19.591119   19299 main.go:141] libmachine: (addons-266998)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 22:14:19.591126   19299 main.go:141] libmachine: (addons-266998)       <source file='/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/addons-266998.rawdisk'/>
	I0919 22:14:19.591140   19299 main.go:141] libmachine: (addons-266998)       <target dev='hda' bus='virtio'/>
	I0919 22:14:19.591147   19299 main.go:141] libmachine: (addons-266998)     </disk>
	I0919 22:14:19.591152   19299 main.go:141] libmachine: (addons-266998)     <interface type='network'>
	I0919 22:14:19.591160   19299 main.go:141] libmachine: (addons-266998)       <source network='mk-addons-266998'/>
	I0919 22:14:19.591165   19299 main.go:141] libmachine: (addons-266998)       <model type='virtio'/>
	I0919 22:14:19.591172   19299 main.go:141] libmachine: (addons-266998)     </interface>
	I0919 22:14:19.591176   19299 main.go:141] libmachine: (addons-266998)     <interface type='network'>
	I0919 22:14:19.591184   19299 main.go:141] libmachine: (addons-266998)       <source network='default'/>
	I0919 22:14:19.591188   19299 main.go:141] libmachine: (addons-266998)       <model type='virtio'/>
	I0919 22:14:19.591198   19299 main.go:141] libmachine: (addons-266998)     </interface>
	I0919 22:14:19.591204   19299 main.go:141] libmachine: (addons-266998)     <serial type='pty'>
	I0919 22:14:19.591215   19299 main.go:141] libmachine: (addons-266998)       <target port='0'/>
	I0919 22:14:19.591225   19299 main.go:141] libmachine: (addons-266998)     </serial>
	I0919 22:14:19.591233   19299 main.go:141] libmachine: (addons-266998)     <console type='pty'>
	I0919 22:14:19.591240   19299 main.go:141] libmachine: (addons-266998)       <target type='serial' port='0'/>
	I0919 22:14:19.591247   19299 main.go:141] libmachine: (addons-266998)     </console>
	I0919 22:14:19.591253   19299 main.go:141] libmachine: (addons-266998)     <rng model='virtio'>
	I0919 22:14:19.591261   19299 main.go:141] libmachine: (addons-266998)       <backend model='random'>/dev/random</backend>
	I0919 22:14:19.591269   19299 main.go:141] libmachine: (addons-266998)     </rng>
	I0919 22:14:19.591276   19299 main.go:141] libmachine: (addons-266998)   </devices>
	I0919 22:14:19.591282   19299 main.go:141] libmachine: (addons-266998) </domain>
	I0919 22:14:19.591291   19299 main.go:141] libmachine: (addons-266998) 
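
The XML above is the definition minikube submits to libvirt; the copy libvirt actually stores (echoed as "starting domain XML" just below) fills in generated details such as the UUID, MAC addresses, and PCI slots. A hedged sketch for dumping both artifacts from the host for comparison, assuming virsh is available on the agent and using the qemu:///system URI from the log:

  # dump the stored domain definition
  virsh --connect qemu:///system dumpxml addons-266998
  # dump the private network created for the profile
  virsh --connect qemu:///system net-dumpxml mk-addons-266998
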
	I0919 22:14:19.598183   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:34:d9:8b in network default
	I0919 22:14:19.598770   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:19.598807   19299 main.go:141] libmachine: (addons-266998) starting domain...
	I0919 22:14:19.598831   19299 main.go:141] libmachine: (addons-266998) ensuring networks are active...
	I0919 22:14:19.599573   19299 main.go:141] libmachine: (addons-266998) Ensuring network default is active
	I0919 22:14:19.599981   19299 main.go:141] libmachine: (addons-266998) Ensuring network mk-addons-266998 is active
	I0919 22:14:19.600651   19299 main.go:141] libmachine: (addons-266998) getting domain XML...
	I0919 22:14:19.601769   19299 main.go:141] libmachine: (addons-266998) DBG | starting domain XML:
	I0919 22:14:19.601795   19299 main.go:141] libmachine: (addons-266998) DBG | <domain type='kvm'>
	I0919 22:14:19.601805   19299 main.go:141] libmachine: (addons-266998) DBG |   <name>addons-266998</name>
	I0919 22:14:19.601825   19299 main.go:141] libmachine: (addons-266998) DBG |   <uuid>b9d3364d-6c8e-40e4-bfa6-b562fb833eaa</uuid>
	I0919 22:14:19.601837   19299 main.go:141] libmachine: (addons-266998) DBG |   <memory unit='KiB'>4194304</memory>
	I0919 22:14:19.601851   19299 main.go:141] libmachine: (addons-266998) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I0919 22:14:19.601873   19299 main.go:141] libmachine: (addons-266998) DBG |   <vcpu placement='static'>2</vcpu>
	I0919 22:14:19.601890   19299 main.go:141] libmachine: (addons-266998) DBG |   <os>
	I0919 22:14:19.601897   19299 main.go:141] libmachine: (addons-266998) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0919 22:14:19.601904   19299 main.go:141] libmachine: (addons-266998) DBG |     <boot dev='cdrom'/>
	I0919 22:14:19.601918   19299 main.go:141] libmachine: (addons-266998) DBG |     <boot dev='hd'/>
	I0919 22:14:19.601929   19299 main.go:141] libmachine: (addons-266998) DBG |     <bootmenu enable='no'/>
	I0919 22:14:19.601937   19299 main.go:141] libmachine: (addons-266998) DBG |   </os>
	I0919 22:14:19.601946   19299 main.go:141] libmachine: (addons-266998) DBG |   <features>
	I0919 22:14:19.601957   19299 main.go:141] libmachine: (addons-266998) DBG |     <acpi/>
	I0919 22:14:19.601966   19299 main.go:141] libmachine: (addons-266998) DBG |     <apic/>
	I0919 22:14:19.601972   19299 main.go:141] libmachine: (addons-266998) DBG |     <pae/>
	I0919 22:14:19.601980   19299 main.go:141] libmachine: (addons-266998) DBG |   </features>
	I0919 22:14:19.601986   19299 main.go:141] libmachine: (addons-266998) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0919 22:14:19.601998   19299 main.go:141] libmachine: (addons-266998) DBG |   <clock offset='utc'/>
	I0919 22:14:19.602010   19299 main.go:141] libmachine: (addons-266998) DBG |   <on_poweroff>destroy</on_poweroff>
	I0919 22:14:19.602019   19299 main.go:141] libmachine: (addons-266998) DBG |   <on_reboot>restart</on_reboot>
	I0919 22:14:19.602042   19299 main.go:141] libmachine: (addons-266998) DBG |   <on_crash>destroy</on_crash>
	I0919 22:14:19.602052   19299 main.go:141] libmachine: (addons-266998) DBG |   <devices>
	I0919 22:14:19.602063   19299 main.go:141] libmachine: (addons-266998) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0919 22:14:19.602073   19299 main.go:141] libmachine: (addons-266998) DBG |     <disk type='file' device='cdrom'>
	I0919 22:14:19.602097   19299 main.go:141] libmachine: (addons-266998) DBG |       <driver name='qemu' type='raw'/>
	I0919 22:14:19.602124   19299 main.go:141] libmachine: (addons-266998) DBG |       <source file='/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/boot2docker.iso'/>
	I0919 22:14:19.602139   19299 main.go:141] libmachine: (addons-266998) DBG |       <target dev='hdc' bus='scsi'/>
	I0919 22:14:19.602150   19299 main.go:141] libmachine: (addons-266998) DBG |       <readonly/>
	I0919 22:14:19.602165   19299 main.go:141] libmachine: (addons-266998) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0919 22:14:19.602175   19299 main.go:141] libmachine: (addons-266998) DBG |     </disk>
	I0919 22:14:19.602186   19299 main.go:141] libmachine: (addons-266998) DBG |     <disk type='file' device='disk'>
	I0919 22:14:19.602203   19299 main.go:141] libmachine: (addons-266998) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0919 22:14:19.602226   19299 main.go:141] libmachine: (addons-266998) DBG |       <source file='/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/addons-266998.rawdisk'/>
	I0919 22:14:19.602238   19299 main.go:141] libmachine: (addons-266998) DBG |       <target dev='hda' bus='virtio'/>
	I0919 22:14:19.602252   19299 main.go:141] libmachine: (addons-266998) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0919 22:14:19.602263   19299 main.go:141] libmachine: (addons-266998) DBG |     </disk>
	I0919 22:14:19.602274   19299 main.go:141] libmachine: (addons-266998) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0919 22:14:19.602287   19299 main.go:141] libmachine: (addons-266998) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0919 22:14:19.602295   19299 main.go:141] libmachine: (addons-266998) DBG |     </controller>
	I0919 22:14:19.602309   19299 main.go:141] libmachine: (addons-266998) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0919 22:14:19.602327   19299 main.go:141] libmachine: (addons-266998) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0919 22:14:19.602342   19299 main.go:141] libmachine: (addons-266998) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0919 22:14:19.602357   19299 main.go:141] libmachine: (addons-266998) DBG |     </controller>
	I0919 22:14:19.602373   19299 main.go:141] libmachine: (addons-266998) DBG |     <interface type='network'>
	I0919 22:14:19.602384   19299 main.go:141] libmachine: (addons-266998) DBG |       <mac address='52:54:00:25:32:d6'/>
	I0919 22:14:19.602393   19299 main.go:141] libmachine: (addons-266998) DBG |       <source network='mk-addons-266998'/>
	I0919 22:14:19.602401   19299 main.go:141] libmachine: (addons-266998) DBG |       <model type='virtio'/>
	I0919 22:14:19.602417   19299 main.go:141] libmachine: (addons-266998) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0919 22:14:19.602428   19299 main.go:141] libmachine: (addons-266998) DBG |     </interface>
	I0919 22:14:19.602437   19299 main.go:141] libmachine: (addons-266998) DBG |     <interface type='network'>
	I0919 22:14:19.602447   19299 main.go:141] libmachine: (addons-266998) DBG |       <mac address='52:54:00:34:d9:8b'/>
	I0919 22:14:19.602463   19299 main.go:141] libmachine: (addons-266998) DBG |       <source network='default'/>
	I0919 22:14:19.602476   19299 main.go:141] libmachine: (addons-266998) DBG |       <model type='virtio'/>
	I0919 22:14:19.602486   19299 main.go:141] libmachine: (addons-266998) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0919 22:14:19.602527   19299 main.go:141] libmachine: (addons-266998) DBG |     </interface>
	I0919 22:14:19.602546   19299 main.go:141] libmachine: (addons-266998) DBG |     <serial type='pty'>
	I0919 22:14:19.602558   19299 main.go:141] libmachine: (addons-266998) DBG |       <target type='isa-serial' port='0'>
	I0919 22:14:19.602563   19299 main.go:141] libmachine: (addons-266998) DBG |         <model name='isa-serial'/>
	I0919 22:14:19.602570   19299 main.go:141] libmachine: (addons-266998) DBG |       </target>
	I0919 22:14:19.602580   19299 main.go:141] libmachine: (addons-266998) DBG |     </serial>
	I0919 22:14:19.602589   19299 main.go:141] libmachine: (addons-266998) DBG |     <console type='pty'>
	I0919 22:14:19.602600   19299 main.go:141] libmachine: (addons-266998) DBG |       <target type='serial' port='0'/>
	I0919 22:14:19.602610   19299 main.go:141] libmachine: (addons-266998) DBG |     </console>
	I0919 22:14:19.602624   19299 main.go:141] libmachine: (addons-266998) DBG |     <input type='mouse' bus='ps2'/>
	I0919 22:14:19.602637   19299 main.go:141] libmachine: (addons-266998) DBG |     <input type='keyboard' bus='ps2'/>
	I0919 22:14:19.602646   19299 main.go:141] libmachine: (addons-266998) DBG |     <audio id='1' type='none'/>
	I0919 22:14:19.602652   19299 main.go:141] libmachine: (addons-266998) DBG |     <memballoon model='virtio'>
	I0919 22:14:19.602664   19299 main.go:141] libmachine: (addons-266998) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0919 22:14:19.602676   19299 main.go:141] libmachine: (addons-266998) DBG |     </memballoon>
	I0919 22:14:19.602684   19299 main.go:141] libmachine: (addons-266998) DBG |     <rng model='virtio'>
	I0919 22:14:19.602715   19299 main.go:141] libmachine: (addons-266998) DBG |       <backend model='random'>/dev/random</backend>
	I0919 22:14:19.602748   19299 main.go:141] libmachine: (addons-266998) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0919 22:14:19.602776   19299 main.go:141] libmachine: (addons-266998) DBG |     </rng>
	I0919 22:14:19.602788   19299 main.go:141] libmachine: (addons-266998) DBG |   </devices>
	I0919 22:14:19.602797   19299 main.go:141] libmachine: (addons-266998) DBG | </domain>
	I0919 22:14:19.602806   19299 main.go:141] libmachine: (addons-266998) DBG | 
	I0919 22:14:20.973323   19299 main.go:141] libmachine: (addons-266998) waiting for domain to start...
	I0919 22:14:20.974564   19299 main.go:141] libmachine: (addons-266998) domain is now running
	I0919 22:14:20.974586   19299 main.go:141] libmachine: (addons-266998) waiting for IP...
	I0919 22:14:20.975349   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:20.975800   19299 main.go:141] libmachine: (addons-266998) DBG | no network interface addresses found for domain addons-266998 (source=lease)
	I0919 22:14:20.975824   19299 main.go:141] libmachine: (addons-266998) DBG | trying to list again with source=arp
	I0919 22:14:20.976028   19299 main.go:141] libmachine: (addons-266998) DBG | unable to find current IP address of domain addons-266998 in network mk-addons-266998 (interfaces detected: [])
	I0919 22:14:20.976088   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:20.976050   19327 retry.go:31] will retry after 259.097865ms: waiting for domain to come up
	I0919 22:14:21.236609   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:21.237250   19299 main.go:141] libmachine: (addons-266998) DBG | no network interface addresses found for domain addons-266998 (source=lease)
	I0919 22:14:21.237272   19299 main.go:141] libmachine: (addons-266998) DBG | trying to list again with source=arp
	I0919 22:14:21.237560   19299 main.go:141] libmachine: (addons-266998) DBG | unable to find current IP address of domain addons-266998 in network mk-addons-266998 (interfaces detected: [])
	I0919 22:14:21.237588   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:21.237532   19327 retry.go:31] will retry after 246.382727ms: waiting for domain to come up
	I0919 22:14:21.485967   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:21.486346   19299 main.go:141] libmachine: (addons-266998) DBG | no network interface addresses found for domain addons-266998 (source=lease)
	I0919 22:14:21.486374   19299 main.go:141] libmachine: (addons-266998) DBG | trying to list again with source=arp
	I0919 22:14:21.486662   19299 main.go:141] libmachine: (addons-266998) DBG | unable to find current IP address of domain addons-266998 in network mk-addons-266998 (interfaces detected: [])
	I0919 22:14:21.486699   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:21.486641   19327 retry.go:31] will retry after 408.407825ms: waiting for domain to come up
	I0919 22:14:21.896122   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:21.896650   19299 main.go:141] libmachine: (addons-266998) DBG | no network interface addresses found for domain addons-266998 (source=lease)
	I0919 22:14:21.896678   19299 main.go:141] libmachine: (addons-266998) DBG | trying to list again with source=arp
	I0919 22:14:21.896991   19299 main.go:141] libmachine: (addons-266998) DBG | unable to find current IP address of domain addons-266998 in network mk-addons-266998 (interfaces detected: [])
	I0919 22:14:21.897019   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:21.896945   19327 retry.go:31] will retry after 519.349117ms: waiting for domain to come up
	I0919 22:14:22.417552   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:22.418042   19299 main.go:141] libmachine: (addons-266998) DBG | no network interface addresses found for domain addons-266998 (source=lease)
	I0919 22:14:22.418069   19299 main.go:141] libmachine: (addons-266998) DBG | trying to list again with source=arp
	I0919 22:14:22.418307   19299 main.go:141] libmachine: (addons-266998) DBG | unable to find current IP address of domain addons-266998 in network mk-addons-266998 (interfaces detected: [])
	I0919 22:14:22.418342   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:22.418286   19327 retry.go:31] will retry after 655.710889ms: waiting for domain to come up
	I0919 22:14:23.075046   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:23.075475   19299 main.go:141] libmachine: (addons-266998) DBG | no network interface addresses found for domain addons-266998 (source=lease)
	I0919 22:14:23.075502   19299 main.go:141] libmachine: (addons-266998) DBG | trying to list again with source=arp
	I0919 22:14:23.075714   19299 main.go:141] libmachine: (addons-266998) DBG | unable to find current IP address of domain addons-266998 in network mk-addons-266998 (interfaces detected: [])
	I0919 22:14:23.075782   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:23.075717   19327 retry.go:31] will retry after 901.479611ms: waiting for domain to come up
	I0919 22:14:23.978421   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:23.978958   19299 main.go:141] libmachine: (addons-266998) DBG | no network interface addresses found for domain addons-266998 (source=lease)
	I0919 22:14:23.978982   19299 main.go:141] libmachine: (addons-266998) DBG | trying to list again with source=arp
	I0919 22:14:23.979240   19299 main.go:141] libmachine: (addons-266998) DBG | unable to find current IP address of domain addons-266998 in network mk-addons-266998 (interfaces detected: [])
	I0919 22:14:23.979263   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:23.979219   19327 retry.go:31] will retry after 836.52553ms: waiting for domain to come up
	I0919 22:14:24.817142   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:24.817643   19299 main.go:141] libmachine: (addons-266998) DBG | no network interface addresses found for domain addons-266998 (source=lease)
	I0919 22:14:24.817672   19299 main.go:141] libmachine: (addons-266998) DBG | trying to list again with source=arp
	I0919 22:14:24.817927   19299 main.go:141] libmachine: (addons-266998) DBG | unable to find current IP address of domain addons-266998 in network mk-addons-266998 (interfaces detected: [])
	I0919 22:14:24.817952   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:24.817906   19327 retry.go:31] will retry after 962.377106ms: waiting for domain to come up
	I0919 22:14:25.782204   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:25.782712   19299 main.go:141] libmachine: (addons-266998) DBG | no network interface addresses found for domain addons-266998 (source=lease)
	I0919 22:14:25.782750   19299 main.go:141] libmachine: (addons-266998) DBG | trying to list again with source=arp
	I0919 22:14:25.782970   19299 main.go:141] libmachine: (addons-266998) DBG | unable to find current IP address of domain addons-266998 in network mk-addons-266998 (interfaces detected: [])
	I0919 22:14:25.783022   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:25.782972   19327 retry.go:31] will retry after 1.349506655s: waiting for domain to come up
	I0919 22:14:27.134789   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:27.135220   19299 main.go:141] libmachine: (addons-266998) DBG | no network interface addresses found for domain addons-266998 (source=lease)
	I0919 22:14:27.135247   19299 main.go:141] libmachine: (addons-266998) DBG | trying to list again with source=arp
	I0919 22:14:27.135645   19299 main.go:141] libmachine: (addons-266998) DBG | unable to find current IP address of domain addons-266998 in network mk-addons-266998 (interfaces detected: [])
	I0919 22:14:27.135671   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:27.135624   19327 retry.go:31] will retry after 1.911476802s: waiting for domain to come up
	I0919 22:14:29.048539   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:29.049159   19299 main.go:141] libmachine: (addons-266998) DBG | no network interface addresses found for domain addons-266998 (source=lease)
	I0919 22:14:29.049188   19299 main.go:141] libmachine: (addons-266998) DBG | trying to list again with source=arp
	I0919 22:14:29.049541   19299 main.go:141] libmachine: (addons-266998) DBG | unable to find current IP address of domain addons-266998 in network mk-addons-266998 (interfaces detected: [])
	I0919 22:14:29.049569   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:29.049517   19327 retry.go:31] will retry after 2.671747697s: waiting for domain to come up
	I0919 22:14:31.723992   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:31.724496   19299 main.go:141] libmachine: (addons-266998) DBG | no network interface addresses found for domain addons-266998 (source=lease)
	I0919 22:14:31.724538   19299 main.go:141] libmachine: (addons-266998) DBG | trying to list again with source=arp
	I0919 22:14:31.724809   19299 main.go:141] libmachine: (addons-266998) DBG | unable to find current IP address of domain addons-266998 in network mk-addons-266998 (interfaces detected: [])
	I0919 22:14:31.724859   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:31.724811   19327 retry.go:31] will retry after 2.883386275s: waiting for domain to come up
	I0919 22:14:34.609530   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:34.610055   19299 main.go:141] libmachine: (addons-266998) DBG | no network interface addresses found for domain addons-266998 (source=lease)
	I0919 22:14:34.610084   19299 main.go:141] libmachine: (addons-266998) DBG | trying to list again with source=arp
	I0919 22:14:34.610374   19299 main.go:141] libmachine: (addons-266998) DBG | unable to find current IP address of domain addons-266998 in network mk-addons-266998 (interfaces detected: [])
	I0919 22:14:34.610403   19299 main.go:141] libmachine: (addons-266998) DBG | I0919 22:14:34.610333   19327 retry.go:31] will retry after 3.687003642s: waiting for domain to come up
	I0919 22:14:38.301245   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:38.301822   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has current primary IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:38.301842   19299 main.go:141] libmachine: (addons-266998) found domain IP: 192.168.39.205
	I0919 22:14:38.301882   19299 main.go:141] libmachine: (addons-266998) reserving static IP address...
	I0919 22:14:38.302249   19299 main.go:141] libmachine: (addons-266998) DBG | unable to find host DHCP lease matching {name: "addons-266998", mac: "52:54:00:25:32:d6", ip: "192.168.39.205"} in network mk-addons-266998
	I0919 22:14:38.504826   19299 main.go:141] libmachine: (addons-266998) reserved static IP address 192.168.39.205 for domain addons-266998
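
The dozen "will retry after ..." lines above show the driver polling for the domain's DHCP lease (falling back to ARP) with a growing, jittered delay until an IP appears. A minimal sketch of that retry pattern, with waitForIP as a hypothetical stand-in for the lease/ARP lookup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP is a hypothetical stand-in for the lease/ARP lookup in the log.
func waitForIP() (string, error) { return "", errors.New("no lease yet") }

func main() {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if ip, err := waitForIP(); err == nil {
			fmt.Println("found domain IP:", ip)
			return
		}
		// Jitter and grow the delay, roughly like the intervals
		// above (259ms, 246ms, 408ms, ... 3.7s).
		d := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", d)
		time.Sleep(d)
		delay = delay * 3 / 2
	}
	fmt.Println("timed out waiting for IP")
}
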
	I0919 22:14:38.504849   19299 main.go:141] libmachine: (addons-266998) waiting for SSH...
	I0919 22:14:38.504868   19299 main.go:141] libmachine: (addons-266998) DBG | Getting to WaitForSSH function...
	I0919 22:14:38.507908   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:38.508414   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:minikube Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:38.508447   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:38.508599   19299 main.go:141] libmachine: (addons-266998) DBG | Using SSH client type: external
	I0919 22:14:38.508620   19299 main.go:141] libmachine: (addons-266998) DBG | Using SSH private key: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa (-rw-------)
	I0919 22:14:38.508693   19299 main.go:141] libmachine: (addons-266998) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.205 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 22:14:38.508710   19299 main.go:141] libmachine: (addons-266998) DBG | About to run SSH command:
	I0919 22:14:38.508722   19299 main.go:141] libmachine: (addons-266998) DBG | exit 0
	I0919 22:14:38.647115   19299 main.go:141] libmachine: (addons-266998) DBG | SSH cmd err, output: <nil>: 
	I0919 22:14:38.647453   19299 main.go:141] libmachine: (addons-266998) domain creation complete
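
The "waiting for SSH" phase above succeeds once `ssh ... exit 0` returns cleanly using the option set dumped in the log. A hedged sketch of that reachability probe (the key path, retry count, and sleep interval are illustrative, not minikube's values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "id_rsa", // illustrative; the log uses the machine's id_rsa path
		"docker@192.168.39.205", "exit 0",
	}
	// Probe until `exit 0` succeeds over SSH, i.e. sshd is accepting logins.
	for attempt := 1; attempt <= 30; attempt++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is up")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
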
	I0919 22:14:38.647926   19299 main.go:141] libmachine: (addons-266998) Calling .GetConfigRaw
	I0919 22:14:38.648593   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:14:38.648814   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:14:38.648999   19299 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 22:14:38.649013   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:14:38.650423   19299 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 22:14:38.650437   19299 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 22:14:38.650459   19299 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 22:14:38.650468   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:14:38.653290   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:38.653712   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:38.653751   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:38.653916   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:14:38.654089   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:14:38.654362   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:14:38.654609   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:14:38.654836   19299 main.go:141] libmachine: Using SSH client type: native
	I0919 22:14:38.655142   19299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0919 22:14:38.655157   19299 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 22:14:38.758580   19299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:14:38.758611   19299 main.go:141] libmachine: Detecting the provisioner...
	I0919 22:14:38.758622   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:14:38.762271   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:38.762765   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:38.762792   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:38.762969   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:14:38.763198   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:14:38.763377   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:14:38.763552   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:14:38.763754   19299 main.go:141] libmachine: Using SSH client type: native
	I0919 22:14:38.763949   19299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0919 22:14:38.763961   19299 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 22:14:38.867611   19299 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0919 22:14:38.867717   19299 main.go:141] libmachine: found compatible host: buildroot
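
"found compatible host: buildroot" comes from matching the ID field of the /etc/os-release output above. A small self-contained sketch of that detection step:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner extracts the ID= value from /etc/os-release content.
func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return "unknown"
}

func main() {
	out := "NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\n"
	fmt.Println(detectProvisioner(out)) // buildroot
}
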
	I0919 22:14:38.867759   19299 main.go:141] libmachine: Provisioning with buildroot...
	I0919 22:14:38.867774   19299 main.go:141] libmachine: (addons-266998) Calling .GetMachineName
	I0919 22:14:38.868018   19299 buildroot.go:166] provisioning hostname "addons-266998"
	I0919 22:14:38.868048   19299 main.go:141] libmachine: (addons-266998) Calling .GetMachineName
	I0919 22:14:38.868248   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:14:38.871296   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:38.871781   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:38.871812   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:38.871978   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:14:38.872194   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:14:38.872379   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:14:38.872515   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:14:38.872662   19299 main.go:141] libmachine: Using SSH client type: native
	I0919 22:14:38.872860   19299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0919 22:14:38.872871   19299 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-266998 && echo "addons-266998" | sudo tee /etc/hostname
	I0919 22:14:38.991385   19299 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-266998
	
	I0919 22:14:38.991420   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:14:38.994987   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:38.995474   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:38.995503   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:38.995692   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:14:38.995900   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:14:38.996059   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:14:38.996228   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:14:38.996381   19299 main.go:141] libmachine: Using SSH client type: native
	I0919 22:14:38.996605   19299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0919 22:14:38.996621   19299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-266998' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-266998/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-266998' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:14:39.115712   19299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
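
The shell snippet above pins the new hostname into /etc/hosts idempotently: do nothing if an entry already exists, rewrite an existing 127.0.1.1 line if present, otherwise append one. The same logic expressed in Go, as a sketch operating on the file content as a string:

package main

import (
	"fmt"
	"strings"
)

// pinHostname mirrors the /etc/hosts edit above: skip if the name is already
// present, rewrite an existing 127.0.1.1 line, else append a new one.
func pinHostname(hosts, name string) string {
	if strings.Contains(hosts, name) {
		return hosts
	}
	lines := strings.Split(hosts, "\n")
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name
}

func main() {
	fmt.Println(pinHostname("127.0.0.1 localhost", "addons-266998"))
}
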
	I0919 22:14:39.115767   19299 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14764/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14764/.minikube}
	I0919 22:14:39.115791   19299 buildroot.go:174] setting up certificates
	I0919 22:14:39.115804   19299 provision.go:84] configureAuth start
	I0919 22:14:39.115816   19299 main.go:141] libmachine: (addons-266998) Calling .GetMachineName
	I0919 22:14:39.116083   19299 main.go:141] libmachine: (addons-266998) Calling .GetIP
	I0919 22:14:39.119372   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:39.119872   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:39.119905   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:39.120083   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:14:39.122955   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:39.123361   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:39.123389   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:39.123563   19299 provision.go:143] copyHostCerts
	I0919 22:14:39.123647   19299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14764/.minikube/ca.pem (1082 bytes)
	I0919 22:14:39.123791   19299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14764/.minikube/cert.pem (1123 bytes)
	I0919 22:14:39.123867   19299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14764/.minikube/key.pem (1679 bytes)
	I0919 22:14:39.123919   19299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca-key.pem org=jenkins.addons-266998 san=[127.0.0.1 192.168.39.205 addons-266998 localhost minikube]
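
The server cert above is issued from the machine CA with the SANs listed in the log line. As a rough, self-contained sketch of building a certificate with those SANs (self-signed and ECDSA here for brevity, whereas minikube signs with its CA key and uses RSA):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-266998"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		// The SANs from the log line above:
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.205")},
		DNSNames:    []string{"addons-266998", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}
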
	I0919 22:14:39.539249   19299 provision.go:177] copyRemoteCerts
	I0919 22:14:39.539310   19299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:14:39.539332   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:14:39.542683   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:39.543131   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:39.543163   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:39.543372   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:14:39.543625   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:14:39.543794   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:14:39.543988   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:14:39.626974   19299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:14:39.659022   19299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:14:39.690262   19299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:14:39.721894   19299 provision.go:87] duration metric: took 606.077075ms to configureAuth
	I0919 22:14:39.721921   19299 buildroot.go:189] setting minikube options for container-runtime
	I0919 22:14:39.722073   19299 config.go:182] Loaded profile config "addons-266998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:14:39.722140   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:14:39.725462   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:39.725861   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:39.725890   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:39.726070   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:14:39.726289   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:14:39.726503   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:14:39.726664   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:14:39.726885   19299 main.go:141] libmachine: Using SSH client type: native
	I0919 22:14:39.727086   19299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0919 22:14:39.727101   19299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:14:39.973779   19299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:14:39.973807   19299 main.go:141] libmachine: Checking connection to Docker...
	I0919 22:14:39.973814   19299 main.go:141] libmachine: (addons-266998) Calling .GetURL
	I0919 22:14:39.975456   19299 main.go:141] libmachine: (addons-266998) DBG | using libvirt version 8000000
	I0919 22:14:39.978373   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:39.978782   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:39.978810   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:39.978984   19299 main.go:141] libmachine: Docker is up and running!
	I0919 22:14:39.979001   19299 main.go:141] libmachine: Reticulating splines...
	I0919 22:14:39.979009   19299 client.go:171] duration metric: took 21.381491623s to LocalClient.Create
	I0919 22:14:39.979032   19299 start.go:167] duration metric: took 21.381567422s to libmachine.API.Create "addons-266998"
	I0919 22:14:39.979042   19299 start.go:293] postStartSetup for "addons-266998" (driver="kvm2")
	I0919 22:14:39.979049   19299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:14:39.979067   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:14:39.979279   19299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:14:39.979297   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:14:39.981750   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:39.982151   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:39.982178   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:39.982361   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:14:39.982584   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:14:39.982763   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:14:39.982920   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:14:40.066007   19299 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:14:40.071219   19299 info.go:137] Remote host: Buildroot 2025.02
	I0919 22:14:40.071253   19299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14764/.minikube/addons for local assets ...
	I0919 22:14:40.071335   19299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14764/.minikube/files for local assets ...
	I0919 22:14:40.071369   19299 start.go:296] duration metric: took 92.322219ms for postStartSetup
	I0919 22:14:40.071470   19299 main.go:141] libmachine: (addons-266998) Calling .GetConfigRaw
	I0919 22:14:40.072153   19299 main.go:141] libmachine: (addons-266998) Calling .GetIP
	I0919 22:14:40.074894   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:40.075252   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:40.075283   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:40.075526   19299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/config.json ...
	I0919 22:14:40.075707   19299 start.go:128] duration metric: took 21.494577473s to createHost
	I0919 22:14:40.075745   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:14:40.078547   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:40.078880   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:40.078909   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:40.079047   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:14:40.079218   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:14:40.079400   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:14:40.079612   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:14:40.079805   19299 main.go:141] libmachine: Using SSH client type: native
	I0919 22:14:40.079985   19299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.205 22 <nil> <nil>}
	I0919 22:14:40.079995   19299 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 22:14:40.183314   19299 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758320080.145662559
	
	I0919 22:14:40.183335   19299 fix.go:216] guest clock: 1758320080.145662559
	I0919 22:14:40.183342   19299 fix.go:229] Guest: 2025-09-19 22:14:40.145662559 +0000 UTC Remote: 2025-09-19 22:14:40.075717884 +0000 UTC m=+21.605072539 (delta=69.944675ms)
	I0919 22:14:40.183361   19299 fix.go:200] guest clock delta is within tolerance: 69.944675ms
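
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and only proceed if the drift is within tolerance. A sketch of that check (the one-second tolerance is an assumption; the log does not show minikube's actual threshold):

package main

import (
	"fmt"
	"time"
)

// withinTolerance returns the absolute guest/host clock delta and whether it
// falls inside the allowed drift.
func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, d <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(69944675 * time.Nanosecond) // the delta observed in the log
	d, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}
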
	I0919 22:14:40.183366   19299 start.go:83] releasing machines lock for "addons-266998", held for 21.602300244s
	I0919 22:14:40.183385   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:14:40.183663   19299 main.go:141] libmachine: (addons-266998) Calling .GetIP
	I0919 22:14:40.186748   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:40.187197   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:40.187219   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:40.187398   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:14:40.187949   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:14:40.188164   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:14:40.188269   19299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:14:40.188310   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:14:40.188397   19299 ssh_runner.go:195] Run: cat /version.json
	I0919 22:14:40.188420   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:14:40.191584   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:40.191791   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:40.191979   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:40.191999   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:40.192185   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:14:40.192368   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:14:40.192379   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:40.192400   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:40.192500   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:14:40.192584   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:14:40.192666   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:14:40.192713   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:14:40.192860   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:14:40.193023   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:14:40.269915   19299 ssh_runner.go:195] Run: systemctl --version
	I0919 22:14:40.301457   19299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:14:40.462976   19299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 22:14:40.470519   19299 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 22:14:40.470580   19299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:14:40.492055   19299 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 22:14:40.492078   19299 start.go:495] detecting cgroup driver to use...
	I0919 22:14:40.492138   19299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:14:40.511422   19299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:14:40.528770   19299 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:14:40.528825   19299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:14:40.547345   19299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:14:40.564215   19299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:14:40.712896   19299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:14:40.921939   19299 docker.go:234] disabling docker service ...
	I0919 22:14:40.922010   19299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:14:40.938843   19299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:14:40.954872   19299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:14:41.118804   19299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:14:41.258287   19299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:14:41.274603   19299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:14:41.297669   19299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:14:41.297761   19299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:14:41.310743   19299 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 22:14:41.310802   19299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:14:41.323582   19299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:14:41.336459   19299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:14:41.351310   19299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:14:41.365504   19299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:14:41.378076   19299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:14:41.399415   19299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
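
The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. An equivalent sketch of the first two edits in Go, applying the same line-anchored substitutions to the file content as a string:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	// Mirror: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Mirror: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
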
	I0919 22:14:41.412434   19299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:14:41.423318   19299 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 22:14:41.423393   19299 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 22:14:41.445199   19299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:14:41.457540   19299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:14:41.600800   19299 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:14:41.724662   19299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:14:41.724776   19299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:14:41.731171   19299 start.go:563] Will wait 60s for crictl version
	I0919 22:14:41.731253   19299 ssh_runner.go:195] Run: which crictl
	I0919 22:14:41.735437   19299 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:14:41.778953   19299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 22:14:41.779100   19299 ssh_runner.go:195] Run: crio --version
	I0919 22:14:41.812483   19299 ssh_runner.go:195] Run: crio --version
	I0919 22:14:41.984005   19299 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0919 22:14:42.046521   19299 main.go:141] libmachine: (addons-266998) Calling .GetIP
	I0919 22:14:42.049738   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:42.050105   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:14:42.050137   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:14:42.050366   19299 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 22:14:42.055517   19299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:14:42.071747   19299 kubeadm.go:875] updating cluster {Name:addons-266998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-266998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:14:42.071888   19299 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:14:42.071942   19299 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:14:42.108547   19299 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0919 22:14:42.108642   19299 ssh_runner.go:195] Run: which lz4
	I0919 22:14:42.113708   19299 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 22:14:42.119089   19299 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 22:14:42.119122   19299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0919 22:14:43.720875   19299 crio.go:462] duration metric: took 1.607223619s to copy over tarball
	I0919 22:14:43.720973   19299 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 22:14:45.367567   19299 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.646559199s)
	I0919 22:14:45.367603   19299 crio.go:469] duration metric: took 1.646701997s to extract the tarball
	I0919 22:14:45.367611   19299 ssh_runner.go:146] rm: /preloaded.tar.lz4
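
Because no preloaded images were found in the CRI store, the ~409 MB preload tarball is copied in over scp and unpacked over /var, after which the temporary file is removed. A sketch of the check-then-extract step (paths as in the log; actually running this requires root and the tarball in place):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Mirror the `stat /preloaded.tar.lz4` existence check in the log.
	if _, err := os.Stat("/preloaded.tar.lz4"); err != nil {
		fmt.Println("preload missing; it would be scp'd over first:", err)
		return
	}
	// Mirror the extraction command from the log.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include",
		"security.capability", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s%v\n", out, err)
}
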
	I0919 22:14:45.409485   19299 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:14:45.455570   19299 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:14:45.455602   19299 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:14:45.455612   19299 kubeadm.go:926] updating node { 192.168.39.205 8443 v1.34.0 crio true true} ...
	I0919 22:14:45.455744   19299 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-266998 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.205
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-266998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:14:45.455821   19299 ssh_runner.go:195] Run: crio config
	I0919 22:14:45.501786   19299 cni.go:84] Creating CNI manager for ""
	I0919 22:14:45.501815   19299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 22:14:45.501830   19299 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:14:45.501859   19299 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.205 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-266998 NodeName:addons-266998 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.205"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.205 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:14:45.502008   19299 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.205
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-266998"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.205"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.205"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
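
The YAML above is rendered from minikube's cluster config (2216 bytes, per the scp line below) before being handed to kubeadm. A toy text/template stand-in showing just the node name and IP substitution; the template text here is illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const nodeTmpl = `nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.Name}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.IP}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(nodeTmpl))
	_ = t.Execute(os.Stdout, struct{ Name, IP string }{"addons-266998", "192.168.39.205"})
}
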
	
	I0919 22:14:45.502088   19299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:14:45.514430   19299 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:14:45.514499   19299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 22:14:45.526426   19299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 22:14:45.548295   19299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:14:45.569864   19299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I0919 22:14:45.591736   19299 ssh_runner.go:195] Run: grep 192.168.39.205	control-plane.minikube.internal$ /etc/hosts
	I0919 22:14:45.596077   19299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.205	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
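
The two commands above make the control-plane hostname resolvable inside the VM: the first grep checks whether the exact "192.168.39.205<TAB>control-plane.minikube.internal" entry already exists, and since it does not on a fresh VM, the one-liner strips any stale control-plane.minikube.internal line, appends the fresh entry, and copies the temp file over /etc/hosts. A rough Go equivalent of that rewrite (run as root; the temp-file step is elided):

package main

import (
	"fmt"
	"os"
	"strings"
)

// rewriteHosts drops any line ending in "<TAB>host" and appends a fresh
// "ip<TAB>host" entry, mirroring the bash one-liner in the log.
func rewriteHosts(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	err := rewriteHosts("/etc/hosts", "192.168.39.205", "control-plane.minikube.internal")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
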
	I0919 22:14:45.611701   19299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:14:45.755497   19299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:14:45.777210   19299 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998 for IP: 192.168.39.205
	I0919 22:14:45.777238   19299 certs.go:194] generating shared ca certs ...
	I0919 22:14:45.777260   19299 certs.go:226] acquiring lock for ca certs: {Name:mk1fe71ea89348ba0bd576e99c774a344fba186e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:45.777459   19299 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14764/.minikube/ca.key
	I0919 22:14:45.897595   19299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt ...
	I0919 22:14:45.897625   19299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt: {Name:mk4cec8f081f0644781d01c8e2dbfd971fef311d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:45.897804   19299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14764/.minikube/ca.key ...
	I0919 22:14:45.897816   19299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/ca.key: {Name:mk058b63f520f62421342917bbd22adc69e184c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:45.897893   19299 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.key
	I0919 22:14:46.095762   19299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.crt ...
	I0919 22:14:46.095792   19299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.crt: {Name:mkdccd107a3f63208eda6376b619a062429fb1fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:46.095956   19299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.key ...
	I0919 22:14:46.095966   19299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.key: {Name:mk7910c6f35531ee10969908cfff3e93610e3d5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:46.096058   19299 certs.go:256] generating profile certs ...
	I0919 22:14:46.096115   19299 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.key
	I0919 22:14:46.096128   19299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt with IP's: []
	I0919 22:14:46.638156   19299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt ...
	I0919 22:14:46.638188   19299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: {Name:mk9eea7434e93cd50023fdc0664994de9be42443 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:46.638350   19299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.key ...
	I0919 22:14:46.638361   19299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.key: {Name:mk40a36c99a9a6537a94efa048696ba93288ae4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:46.638428   19299 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/apiserver.key.9b67aee4
	I0919 22:14:46.638445   19299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/apiserver.crt.9b67aee4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.205]
	I0919 22:14:46.681478   19299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/apiserver.crt.9b67aee4 ...
	I0919 22:14:46.681506   19299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/apiserver.crt.9b67aee4: {Name:mk0d122113a35687652067db209ea4ff42405194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:46.681656   19299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/apiserver.key.9b67aee4 ...
	I0919 22:14:46.681668   19299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/apiserver.key.9b67aee4: {Name:mk83c617d4d578ed3232515a5bb664f292f15aff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:46.681746   19299 certs.go:381] copying /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/apiserver.crt.9b67aee4 -> /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/apiserver.crt
	I0919 22:14:46.681843   19299 certs.go:385] copying /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/apiserver.key.9b67aee4 -> /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/apiserver.key
	I0919 22:14:46.681897   19299 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/proxy-client.key
	I0919 22:14:46.681914   19299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/proxy-client.crt with IP's: []
	I0919 22:14:47.052772   19299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/proxy-client.crt ...
	I0919 22:14:47.052801   19299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/proxy-client.crt: {Name:mk2474974195611c2b00daf3bf520cca7a2f6da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:47.052956   19299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/proxy-client.key ...
	I0919 22:14:47.052967   19299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/proxy-client.key: {Name:mk44817c3898a4bb1ff542bdb18986b74b60873e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:47.053140   19299 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:14:47.053173   19299 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:14:47.053204   19299 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:14:47.053225   19299 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/key.pem (1679 bytes)
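
The cert sequence above builds minikube's two-tier PKI: two self-signed CAs (minikubeCA for the cluster, proxyClientCA for the front-proxy aggregation layer), then per-profile leaf certs signed by them: a "minikube-user" client cert, an apiserver serving cert whose SANs include the in-cluster service IP 10.96.0.1 and the node IP 192.168.39.205, and an "aggregator" proxy client cert. A compressed crypto/x509 sketch of the CA-plus-leaf pattern (error handling elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, analogous to minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver-style serving cert signed by the CA, with the SAN IPs
	// from the log line above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.205"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
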
	I0919 22:14:47.053857   19299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:14:47.088046   19299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:14:47.119747   19299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:14:47.150608   19299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:14:47.180983   19299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 22:14:47.211828   19299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:14:47.242528   19299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:14:47.272999   19299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:14:47.303283   19299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:14:47.332994   19299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:14:47.353845   19299 ssh_runner.go:195] Run: openssl version
	I0919 22:14:47.360257   19299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:14:47.373769   19299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:14:47.379347   19299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:14:47.379397   19299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:14:47.387195   19299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
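
The commands above install minikubeCA.pem into the VM's trust store the way OpenSSL's c_rehash does: "openssl x509 -hash -noout" prints the certificate's subject hash (b5213941 here), and the PEM is symlinked as <hash>.0 so OpenSSL's hash-based lookup finds it during verification. A small Go sketch of the same step, shelling out to openssl (assumes openssl on PATH and write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	// "openssl x509 -hash -noout" prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// Equivalent to the "ln -fs" above: replace any stale link.
	os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
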
	I0919 22:14:47.403589   19299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:14:47.410757   19299 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:14:47.410810   19299 kubeadm.go:392] StartCluster: {Name:addons-266998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-266998 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:14:47.410895   19299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 22:14:47.410946   19299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:14:47.458063   19299 cri.go:89] found id: ""
	I0919 22:14:47.458127   19299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:14:47.472782   19299 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:14:47.485464   19299 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:14:47.498651   19299 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:14:47.498685   19299 kubeadm.go:157] found existing configuration files:
	
	I0919 22:14:47.498745   19299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:14:47.511106   19299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:14:47.511169   19299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:14:47.524455   19299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:14:47.535982   19299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:14:47.536036   19299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:14:47.549275   19299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:14:47.561469   19299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:14:47.561542   19299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:14:47.574701   19299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:14:47.587227   19299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:14:47.587279   19299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
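
The grep/rm sequence above is minikube's stale-config cleanup: each of admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf is grepped for the expected control-plane endpoint and removed when the check fails, so kubeadm init always starts from a consistent set (here every grep fails simply because the files do not exist yet on first start). A condensed Go sketch of that loop, operating on local paths rather than over SSH:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// A missing file or a missing endpoint both mean "stale":
		// remove it so kubeadm init regenerates the whole set.
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}
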
	I0919 22:14:47.600042   19299 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 22:14:47.769524   19299 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 22:15:00.632114   19299 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:15:00.632205   19299 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:15:00.632305   19299 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:15:00.632427   19299 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:15:00.632570   19299 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:15:00.632669   19299 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:15:00.634399   19299 out.go:252]   - Generating certificates and keys ...
	I0919 22:15:00.634495   19299 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:15:00.634565   19299 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:15:00.634643   19299 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:15:00.634709   19299 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:15:00.634799   19299 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:15:00.634851   19299 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:15:00.634895   19299 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:15:00.635015   19299 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-266998 localhost] and IPs [192.168.39.205 127.0.0.1 ::1]
	I0919 22:15:00.635079   19299 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:15:00.635220   19299 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-266998 localhost] and IPs [192.168.39.205 127.0.0.1 ::1]
	I0919 22:15:00.635310   19299 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:15:00.635395   19299 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:15:00.635452   19299 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:15:00.635537   19299 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:15:00.635626   19299 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:15:00.635717   19299 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:15:00.635818   19299 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:15:00.635880   19299 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:15:00.635947   19299 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:15:00.636034   19299 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:15:00.636117   19299 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:15:00.637379   19299 out.go:252]   - Booting up control plane ...
	I0919 22:15:00.637485   19299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:15:00.637589   19299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:15:00.637684   19299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:15:00.637821   19299 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:15:00.637995   19299 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:15:00.638089   19299 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:15:00.638156   19299 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:15:00.638197   19299 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:15:00.638330   19299 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:15:00.638431   19299 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:15:00.638507   19299 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002400361s
	I0919 22:15:00.638604   19299 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:15:00.638715   19299 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.39.205:8443/livez
	I0919 22:15:00.638841   19299 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:15:00.638961   19299 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:15:00.639073   19299 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.091213229s
	I0919 22:15:00.639167   19299 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 3.684388781s
	I0919 22:15:00.639230   19299 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 5.502198999s
	I0919 22:15:00.639322   19299 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:15:00.639428   19299 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:15:00.639502   19299 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:15:00.639693   19299 kubeadm.go:310] [mark-control-plane] Marking the node addons-266998 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:15:00.639794   19299 kubeadm.go:310] [bootstrap-token] Using token: se7i9u.wzmegz9duaykb4al
	I0919 22:15:00.641261   19299 out.go:252]   - Configuring RBAC rules ...
	I0919 22:15:00.641372   19299 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:15:00.641484   19299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:15:00.641635   19299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:15:00.641838   19299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:15:00.641934   19299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:15:00.642006   19299 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:15:00.642109   19299 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:15:00.642175   19299 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:15:00.642235   19299 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:15:00.642245   19299 kubeadm.go:310] 
	I0919 22:15:00.642325   19299 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:15:00.642338   19299 kubeadm.go:310] 
	I0919 22:15:00.642441   19299 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:15:00.642452   19299 kubeadm.go:310] 
	I0919 22:15:00.642496   19299 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:15:00.642594   19299 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:15:00.642671   19299 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:15:00.642682   19299 kubeadm.go:310] 
	I0919 22:15:00.642776   19299 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:15:00.642785   19299 kubeadm.go:310] 
	I0919 22:15:00.642849   19299 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:15:00.642858   19299 kubeadm.go:310] 
	I0919 22:15:00.642929   19299 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:15:00.643004   19299 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:15:00.643063   19299 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:15:00.643070   19299 kubeadm.go:310] 
	I0919 22:15:00.643139   19299 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:15:00.643227   19299 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:15:00.643237   19299 kubeadm.go:310] 
	I0919 22:15:00.643334   19299 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token se7i9u.wzmegz9duaykb4al \
	I0919 22:15:00.643470   19299 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:764767ee84c2df0ad4ae14ef93303d4368042da5603c686ffbd3dbfd5d1666a5 \
	I0919 22:15:00.643502   19299 kubeadm.go:310] 	--control-plane 
	I0919 22:15:00.643512   19299 kubeadm.go:310] 
	I0919 22:15:00.643642   19299 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:15:00.643654   19299 kubeadm.go:310] 
	I0919 22:15:00.643777   19299 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token se7i9u.wzmegz9duaykb4al \
	I0919 22:15:00.643881   19299 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:764767ee84c2df0ad4ae14ef93303d4368042da5603c686ffbd3dbfd5d1666a5 
	I0919 22:15:00.643891   19299 cni.go:84] Creating CNI manager for ""
	I0919 22:15:00.643897   19299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 22:15:00.645308   19299 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 22:15:00.646606   19299 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 22:15:00.661615   19299 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
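
With the kvm2 driver and crio, minikube falls back to the plain bridge CNI and writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The sketch below writes a representative bridge-plus-portmap conflist; the values are typical for this kind of setup, not a byte-for-byte copy of minikube's template:

package main

import "os"

// An illustrative bridge CNI conflist; minikube's actual template
// may differ in fields and values.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
`

func main() {
	os.MkdirAll("/etc/cni/net.d", 0755)
	os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644)
}
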
	I0919 22:15:00.685514   19299 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:15:00.685578   19299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:00.685618   19299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-266998 minikube.k8s.io/updated_at=2025_09_19T22_15_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=addons-266998 minikube.k8s.io/primary=true
	I0919 22:15:00.738891   19299 ops.go:34] apiserver oom_adj: -16
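
The "apiserver oom_adj: -16" reading confirms the kubelet started the apiserver with a strongly negative OOM adjustment, so the kernel's OOM killer will spare it long after ordinary pods are killed (-17 on the legacy oom_adj scale disables OOM killing entirely). A tiny Go probe mirroring the cat of /proc/$(pgrep kube-apiserver)/oom_adj above, using the modern oom_score_adj file and taking the PID as an argument:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Usage: oomprobe <pid>
	data, err := os.ReadFile("/proc/" + os.Args[1] + "/oom_score_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("oom_score_adj:", strings.TrimSpace(string(data)))
}
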
	I0919 22:15:00.851971   19299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:01.352850   19299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:01.852527   19299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:02.352398   19299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:02.852800   19299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:03.352403   19299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:03.852023   19299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:04.352455   19299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:04.852272   19299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:05.352936   19299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:15:05.501454   19299 kubeadm.go:1105] duration metric: took 4.815938055s to wait for elevateKubeSystemPrivileges
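
The burst of "kubectl get sa default" runs at roughly 500ms intervals is minikube waiting for the default ServiceAccount to appear, i.e. for the service account controller to come up, as part of the elevateKubeSystemPrivileges step timed in the duration metric above (about 4.8s here). A minimal polling sketch with os/exec; the kubeconfig path is taken from the log, and a real caller would add a timeout:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll every 500ms until "kubectl get sa default" succeeds,
	// i.e. the service account controller has created the default
	// ServiceAccount.
	for {
		err := exec.Command("kubectl",
			"--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
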
	I0919 22:15:05.501493   19299 kubeadm.go:394] duration metric: took 18.090688013s to StartCluster
	I0919 22:15:05.501511   19299 settings.go:142] acquiring lock: {Name:mk9e6bfe60e4d22990b0b362d40b65315947b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:15:05.501644   19299 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 22:15:05.502066   19299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/kubeconfig: {Name:mk29db95201211dec339ee278b6433541126d194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:15:05.502268   19299 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.205 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:15:05.502279   19299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:15:05.502340   19299 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
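
Every addon flagged true in the toEnable map above fans out into its own "Setting addon X=true" line below, apparently each on its own goroutine, hence the interleaved log order. A trivial sketch of that fan-out, sorted for deterministic output:

package main

import (
	"fmt"
	"sort"
)

func main() {
	// A small excerpt of the toEnable map from the log.
	toEnable := map[string]bool{
		"ingress": true, "ingress-dns": true, "metrics-server": true,
		"registry": true, "volcano": true, "dashboard": false,
	}
	var enabled []string
	for name, on := range toEnable {
		if on {
			enabled = append(enabled, name)
		}
	}
	sort.Strings(enabled)
	for _, name := range enabled {
		fmt.Printf("Setting addon %s=true in \"addons-266998\"\n", name)
	}
}
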
	I0919 22:15:05.502487   19299 addons.go:69] Setting inspektor-gadget=true in profile "addons-266998"
	I0919 22:15:05.502507   19299 addons.go:69] Setting yakd=true in profile "addons-266998"
	I0919 22:15:05.502522   19299 addons.go:69] Setting gcp-auth=true in profile "addons-266998"
	I0919 22:15:05.502524   19299 config.go:182] Loaded profile config "addons-266998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:15:05.502541   19299 addons.go:238] Setting addon yakd=true in "addons-266998"
	I0919 22:15:05.502551   19299 mustload.go:65] Loading cluster: addons-266998
	I0919 22:15:05.502579   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.502590   19299 addons.go:69] Setting metrics-server=true in profile "addons-266998"
	I0919 22:15:05.502562   19299 addons.go:69] Setting registry-creds=true in profile "addons-266998"
	I0919 22:15:05.502603   19299 addons.go:238] Setting addon metrics-server=true in "addons-266998"
	I0919 22:15:05.502597   19299 addons.go:69] Setting default-storageclass=true in profile "addons-266998"
	I0919 22:15:05.502626   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.502631   19299 addons.go:238] Setting addon registry-creds=true in "addons-266998"
	I0919 22:15:05.502612   19299 addons.go:69] Setting ingress=true in profile "addons-266998"
	I0919 22:15:05.502636   19299 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-266998"
	I0919 22:15:05.502662   19299 addons.go:238] Setting addon ingress=true in "addons-266998"
	I0919 22:15:05.502694   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.502720   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.502747   19299 config.go:182] Loaded profile config "addons-266998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:15:05.503007   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.503039   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.503040   19299 addons.go:69] Setting ingress-dns=true in profile "addons-266998"
	I0919 22:15:05.503052   19299 addons.go:238] Setting addon ingress-dns=true in "addons-266998"
	I0919 22:15:05.503076   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.503101   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.503125   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.503133   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.503153   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.503161   19299 addons.go:69] Setting registry=true in profile "addons-266998"
	I0919 22:15:05.503177   19299 addons.go:238] Setting addon registry=true in "addons-266998"
	I0919 22:15:05.503187   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.503197   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.503202   19299 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-266998"
	I0919 22:15:05.503220   19299 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-266998"
	I0919 22:15:05.503235   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.503198   19299 addons.go:69] Setting cloud-spanner=true in profile "addons-266998"
	I0919 22:15:05.503241   19299 addons.go:69] Setting volcano=true in profile "addons-266998"
	I0919 22:15:05.503251   19299 addons.go:238] Setting addon cloud-spanner=true in "addons-266998"
	I0919 22:15:05.503252   19299 addons.go:238] Setting addon volcano=true in "addons-266998"
	I0919 22:15:05.503263   19299 addons.go:69] Setting storage-provisioner=true in profile "addons-266998"
	I0919 22:15:05.503273   19299 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-266998"
	I0919 22:15:05.503281   19299 addons.go:238] Setting addon storage-provisioner=true in "addons-266998"
	I0919 22:15:05.503265   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.503299   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.503324   19299 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-266998"
	I0919 22:15:05.503346   19299 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-266998"
	I0919 22:15:05.503365   19299 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-266998"
	I0919 22:15:05.502512   19299 addons.go:238] Setting addon inspektor-gadget=true in "addons-266998"
	I0919 22:15:05.503382   19299 addons.go:69] Setting volumesnapshots=true in profile "addons-266998"
	I0919 22:15:05.503394   19299 addons.go:238] Setting addon volumesnapshots=true in "addons-266998"
	I0919 22:15:05.502579   19299 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-266998"
	I0919 22:15:05.503449   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.503476   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.503481   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.503488   19299 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-266998"
	I0919 22:15:05.503500   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.503505   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.503516   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.503552   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.503909   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.503942   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.503964   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.504034   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.504188   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.504200   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.504213   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.504219   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.504562   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.504598   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.504656   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.504937   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.504945   19299 out.go:179] * Verifying Kubernetes components...
	I0919 22:15:05.505266   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.506612   19299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:15:05.511097   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.511127   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.514366   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.514400   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.515085   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.515242   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.518003   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.518035   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.518572   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.518595   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.518776   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.518812   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.534839   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37037
	I0919 22:15:05.534850   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43953
	I0919 22:15:05.541834   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43351
	I0919 22:15:05.542037   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.542969   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.543000   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.543381   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.543967   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.544007   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.546404   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.547041   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.547059   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.549878   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37267
	I0919 22:15:05.550037   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.550094   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44769
	I0919 22:15:05.550493   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46159
	I0919 22:15:05.551690   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.551742   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.551810   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.551913   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.552153   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.552170   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.552239   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.552253   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.552560   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.552669   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.553077   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.553158   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.553572   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43049
	I0919 22:15:05.553636   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.553650   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.554010   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.554135   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.554148   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.555700   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.562037   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.562373   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36917
	I0919 22:15:05.562565   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40617
	I0919 22:15:05.563151   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35683
	I0919 22:15:05.564096   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.564120   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.564511   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.565061   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.565114   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.565335   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.565434   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.565552   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.566068   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.566106   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.566594   19299 addons.go:238] Setting addon default-storageclass=true in "addons-266998"
	I0919 22:15:05.566642   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.567090   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.567138   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.567438   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36609
	I0919 22:15:05.567518   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.567533   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.567659   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.567702   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.567716   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.567805   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40643
	I0919 22:15:05.568007   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0919 22:15:05.568205   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.568238   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.568711   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.568798   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.568989   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.569370   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.569390   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.569644   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.569655   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.569765   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.570380   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.571426   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.571453   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.572090   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.572443   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.572622   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.572972   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.572986   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.573390   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.573394   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.573420   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.573897   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.573931   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.575294   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.575851   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.576586   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.575906   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.576367   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.577342   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.576399   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I0919 22:15:05.576434   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34013
	I0919 22:15:05.586895   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.586924   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.587020   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32899
	I0919 22:15:05.587175   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46497
	I0919 22:15:05.587491   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.590041   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.590062   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.590137   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.591086   19299 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-266998"
	I0919 22:15:05.591124   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:05.591496   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.591533   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.591773   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.591774   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.592207   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.592350   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.592364   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.592422   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41645
	I0919 22:15:05.592613   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.592644   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.592907   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.592972   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38591
	I0919 22:15:05.592991   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42567
	I0919 22:15:05.593376   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.593390   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.593455   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.593486   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.593822   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.594159   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.594876   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.596053   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.596090   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.597317   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.597472   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44569
	I0919 22:15:05.597930   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.597945   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.598003   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.598397   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.598410   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.598769   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.598833   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.599297   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.599360   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.599376   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.599376   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.599949   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.599963   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.600354   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.600577   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.600927   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.601006   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.601287   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.601319   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.602220   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.602244   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.602297   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.604216   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.604364   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.606472   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.607043   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.609371   19299 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0919 22:15:05.610643   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I0919 22:15:05.610950   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.612339   19299 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0919 22:15:05.612827   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.613534   19299 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0919 22:15:05.613555   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 22:15:05.613575   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:05.615195   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.615317   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33081
	I0919 22:15:05.615764   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.615779   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.616182   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.616254   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.616445   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.616717   19299 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 22:15:05.616758   19299 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0919 22:15:05.616781   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:05.617568   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.617586   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.617923   19299 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0919 22:15:05.621008   19299 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0919 22:15:05.621032   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0919 22:15:05.621053   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:05.625856   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.625893   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:05.625907   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.625856   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.625863   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:05.625943   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.625909   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36523
	I0919 22:15:05.625910   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.626119   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:05.626436   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.626541   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.626629   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45771
	I0919 22:15:05.627044   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.627067   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.627205   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:05.627818   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.628040   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.628146   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:15:05.628649   19299 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0919 22:15:05.628778   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41959
	I0919 22:15:05.629914   19299 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 22:15:05.629935   19299 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 22:15:05.629955   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:05.630341   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:05.630350   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:05.630368   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40663
	I0919 22:15:05.630372   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.630544   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:05.630670   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:05.630813   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:15:05.631683   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.632097   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.632202   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.633319   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.633337   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.633802   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.633867   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.633918   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.633870   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:05.634079   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.633840   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:05.634269   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.634405   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:05.634455   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.634539   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:05.634651   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.634716   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.634879   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:15:05.635431   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44513
	I0919 22:15:05.635612   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.636227   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.636247   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.636328   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.636974   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.637043   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.637254   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.637657   19299 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0919 22:15:05.637778   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40983
	I0919 22:15:05.638360   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.638378   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.639268   19299 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0919 22:15:05.639287   19299 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0919 22:15:05.639290   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
	I0919 22:15:05.639307   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:05.639454   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.639487   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.639614   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.639613   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.640233   19299 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0919 22:15:05.640377   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.640435   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:05.640487   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.640634   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.640673   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:05.640813   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:05.640915   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:05.641061   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:15:05.641220   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.641220   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.641233   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.641234   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.641328   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.641623   19299 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0919 22:15:05.641708   19299 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0919 22:15:05.641747   19299 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 22:15:05.641764   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0919 22:15:05.641780   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:05.642123   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.642323   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.642864   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.642923   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.643100   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.643625   19299 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0919 22:15:05.643647   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0919 22:15:05.643663   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:05.643629   19299 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 22:15:05.643747   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0919 22:15:05.643763   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:05.644062   19299 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 22:15:05.645280   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35551
	I0919 22:15:05.645395   19299 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 22:15:05.645476   19299 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 22:15:05.645507   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:05.645923   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.646062   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42465
	I0919 22:15:05.646288   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.646823   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.647301   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.647366   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.647538   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.647566   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.647642   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.647841   19299 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0919 22:15:05.648404   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.648430   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:05.648453   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.648515   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.648614   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.648639   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.648787   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:05.649192   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.649271   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46123
	I0919 22:15:05.649774   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:05.649844   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.649885   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.649890   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.650159   19299 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 22:15:05.650173   19299 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:15:05.650292   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:05.650381   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.650401   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.650770   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.650574   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:15:05.651554   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:05.651655   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:05.651804   19299 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:15:05.651817   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:15:05.651842   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:05.651607   19299 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0919 22:15:05.652075   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0919 22:15:05.652765   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.653044   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.653438   19299 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0919 22:15:05.654185   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.654333   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:05.654361   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.654602   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:05.654806   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:05.655047   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:05.655193   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.655234   19299 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0919 22:15:05.655227   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:15:05.655463   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:05.655484   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.655612   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.655622   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:05.655624   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.655951   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.656025   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:05.656436   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:05.656459   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.656554   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:05.656567   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:05.656636   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.656714   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:05.656784   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:05.656911   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:05.656926   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:05.656933   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:05.657016   19299 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 22:15:05.657389   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:15:05.657866   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:05.657893   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:05.657901   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	W0919 22:15:05.657968   19299 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0919 22:15:05.658274   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.658398   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.658421   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:05.658722   19299 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 22:15:05.658762   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0919 22:15:05.658779   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:05.659279   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:05.659310   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:05.659468   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:05.659816   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.659843   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:15:05.660189   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.660926   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:05.661118   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:05.661281   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:05.661408   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:15:05.661613   19299 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 22:15:05.661720   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:05.661758   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.662127   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:05.662476   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:05.662744   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:05.662908   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.663045   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:15:05.664435   19299 out.go:179]   - Using image docker.io/registry:3.0.0
	I0919 22:15:05.664468   19299 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 22:15:05.665059   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.665534   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:05.665562   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.665834   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:05.666041   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:05.666260   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:05.666419   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:15:05.666948   19299 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0919 22:15:05.666991   19299 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 22:15:05.668530   19299 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 22:15:05.668550   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0919 22:15:05.668568   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:05.668572   19299 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 22:15:05.669559   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33741
	I0919 22:15:05.669807   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
	I0919 22:15:05.669970   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.670374   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:05.670454   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.670476   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.670856   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:05.670872   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:05.670901   19299 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 22:15:05.670899   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.671202   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.671253   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:05.671415   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:05.672024   19299 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 22:15:05.672044   19299 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 22:15:05.672062   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:05.672929   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.673566   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:05.673601   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.673909   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:05.674077   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:05.674170   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.674217   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:05.674380   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:05.674432   19299 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:15:05.674431   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:15:05.674452   19299 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:15:05.674468   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:05.675961   19299 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0919 22:15:05.676975   19299 out.go:179]   - Using image docker.io/busybox:stable
	I0919 22:15:05.677447   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.677905   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:05.677933   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.678111   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:05.678260   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:05.678414   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:05.678437   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.678568   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:15:05.678618   19299 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 22:15:05.678636   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0919 22:15:05.678655   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:05.679018   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:05.679039   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.679407   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:05.679548   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:05.679713   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:05.679877   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:15:05.681628   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.682001   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:05.682029   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:05.682173   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:05.682347   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:05.682491   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:05.682611   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	W0919 22:15:06.033151   19299 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57790->192.168.39.205:22: read: connection reset by peer
	I0919 22:15:06.033197   19299 retry.go:31] will retry after 233.17718ms: ssh: handshake failed: read tcp 192.168.39.1:57790->192.168.39.205:22: read: connection reset by peer
	W0919 22:15:06.033155   19299 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57794->192.168.39.205:22: read: connection reset by peer
	I0919 22:15:06.033212   19299 retry.go:31] will retry after 147.952664ms: ssh: handshake failed: read tcp 192.168.39.1:57794->192.168.39.205:22: read: connection reset by peer
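The two warning/retry pairs above show minikube dialing the VM's sshd and retrying after the first handshakes are reset. A minimal Go sketch of that retry-with-jittered-delay pattern, assuming a hypothetical dialSSH helper; this is not minikube's actual retry.go, just the shape of the behavior the log records:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// dialSSH stands in for the real SSH dial; its failures are what
// trigger the "will retry after ..." lines seen in the log above.
func dialSSH(addr string) error {
	return fmt.Errorf("ssh: handshake failed: connection reset by peer")
}

// dialWithRetry retries the dial a bounded number of times, sleeping a
// jittered sub-second delay between attempts (cf. "will retry after 233.17718ms").
func dialWithRetry(addr string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = dialSSH(addr); err == nil {
			return nil
		}
		delay := time.Duration(100+rand.Intn(200)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	if err := dialWithRetry("192.168.39.205:22", 3); err != nil {
		fmt.Println("giving up:", err)
	}
}
```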
	I0919 22:15:06.463563   19299 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 22:15:06.463584   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
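Many of the steps above stream an in-memory manifest straight to a file on the guest ("scp memory --> ...") over the SSH client built at sshutil.go:53. A hedged sketch of one way to do this with golang.org/x/crypto/ssh, piping the bytes through sudo tee instead of the scp protocol; the key path, username, and address are copied from the log, while the tee approach and everything else are assumptions, not minikube's actual ssh_runner:

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// copyMemoryToFile streams data to dst on the remote host by feeding it
// to a sudo tee via the session's stdin, so no local temp file is needed.
func copyMemoryToFile(client *ssh.Client, data []byte, dst string) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.205:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: example\n")
	if err := copyMemoryToFile(client, manifest, "/tmp/example.yaml"); err != nil {
		panic(err)
	}
}
```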
	I0919 22:15:06.545892   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 22:15:06.566799   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0919 22:15:06.590698   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 22:15:06.722984   19299 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.220677411s)
	I0919 22:15:06.723027   19299 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.216387753s)
	I0919 22:15:06.723103   19299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:15:06.723161   19299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:15:06.729552   19299 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:06.729571   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0919 22:15:06.740672   19299 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0919 22:15:06.740707   19299 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0919 22:15:06.753536   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:15:06.753890   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 22:15:06.792184   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0919 22:15:06.814354   19299 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 22:15:06.814385   19299 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 22:15:06.826639   19299 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 22:15:06.826672   19299 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 22:15:07.012498   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 22:15:07.014340   19299 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 22:15:07.014367   19299 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 22:15:07.066208   19299 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 22:15:07.066236   19299 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 22:15:07.337152   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:07.347432   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:15:07.390145   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 22:15:07.497145   19299 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0919 22:15:07.497175   19299 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0919 22:15:07.502297   19299 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 22:15:07.502324   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 22:15:07.534848   19299 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 22:15:07.534874   19299 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 22:15:07.554329   19299 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 22:15:07.554359   19299 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 22:15:07.583907   19299 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 22:15:07.583934   19299 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 22:15:07.735496   19299 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0919 22:15:07.735525   19299 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0919 22:15:07.744413   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 22:15:07.877371   19299 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 22:15:07.877396   19299 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 22:15:07.881931   19299 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 22:15:07.881959   19299 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 22:15:07.906366   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 22:15:08.036679   19299 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0919 22:15:08.036705   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0919 22:15:08.272223   19299 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 22:15:08.272263   19299 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 22:15:08.330844   19299 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 22:15:08.330873   19299 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 22:15:08.583773   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0919 22:15:08.716974   19299 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 22:15:08.717001   19299 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 22:15:08.718497   19299 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 22:15:08.718518   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 22:15:09.032434   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 22:15:09.172262   19299 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 22:15:09.172286   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 22:15:09.703647   19299 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 22:15:09.703676   19299 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 22:15:09.938582   19299 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 22:15:09.938607   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 22:15:10.158993   19299 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 22:15:10.159017   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 22:15:10.400494   19299 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 22:15:10.400521   19299 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 22:15:10.673495   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 22:15:11.845531   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.299597662s)
	I0919 22:15:11.845583   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:11.845594   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:11.845629   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.278791743s)
	I0919 22:15:11.845672   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.254947688s)
	I0919 22:15:11.845677   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:11.845698   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:11.845707   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:11.845743   19299 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.122547385s)
	I0919 22:15:11.845771   19299 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
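The long /bin/bash pipeline completed above edits the coredns ConfigMap with sed, inserting a hosts stanza ahead of the forward plugin (and a log directive before errors) so that host.minikube.internal resolves to 192.168.39.1. The hosts part of that edit, expressed as a small Go sketch over the Corefile text; the sample Corefile in main is an assumed minimal shape, not the cluster's exact one:

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza immediately before the
// "forward . /etc/resolv.conf" plugin line, mirroring the sed -e '/.../i ...'
// command in the log above.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
}
```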
	I0919 22:15:11.845793   19299 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.1226721s)
	I0919 22:15:11.845927   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:11.845943   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:11.845954   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:11.845963   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:11.845967   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:11.845970   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:11.845975   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:11.845983   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:11.845992   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:11.846285   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.092719412s)
	I0919 22:15:11.846286   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:11.846309   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:11.846313   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:11.846323   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:11.846369   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:11.846394   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.092485345s)
	I0919 22:15:11.846411   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:11.846427   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:11.846419   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:11.846455   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:11.846458   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.054246071s)
	I0919 22:15:11.846471   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:11.846479   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:11.845708   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:11.846898   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:11.846922   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:11.846929   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:11.846930   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:11.846937   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:11.846944   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:11.846960   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:11.846967   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:11.846974   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:11.846980   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:11.847345   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:11.847376   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:11.847410   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:11.847418   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:11.847861   19299 node_ready.go:35] waiting up to 6m0s for node "addons-266998" to be "Ready" ...
	I0919 22:15:11.848002   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:11.848051   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:11.848062   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:11.848069   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:11.848076   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:11.848079   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:11.848089   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:11.848365   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:11.848391   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:11.848411   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:11.848861   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:11.848874   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:11.848888   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:11.848896   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:11.849057   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:11.849072   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:11.938932   19299 node_ready.go:49] node "addons-266998" is "Ready"
	I0919 22:15:11.938958   19299 node_ready.go:38] duration metric: took 91.055633ms for node "addons-266998" to be "Ready" ...
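node_ready.go above polls the node until it reports "Ready", within a 6m0s budget (here it was already Ready after 91ms). A hedged client-go sketch of that kind of wait; the kubeconfig path, node name, and timeout are taken from the log, while the 2-second poll interval and the loop details are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the NodeReady condition is True.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-266998", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node readiness")
}
```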
	I0919 22:15:11.938970   19299 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:15:11.939025   19299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:15:12.481970   19299 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-266998" context rescaled to 1 replicas
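kapi.go:214 above scales the coredns deployment down to one replica, which is enough for a single-node cluster. A possible client-go rendering of that step via the scale subresource; the deployment name and namespace come from the log, everything else is an assumption:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx := context.TODO()
	deployments := client.AppsV1().Deployments("kube-system")

	// Read the current scale, set it to 1 replica, and write it back
	// through the scale subresource.
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```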
	I0919 22:15:13.069171   19299 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 22:15:13.069218   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:13.072431   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:13.072970   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:13.073007   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:13.073227   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:13.073451   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:13.073635   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:13.073782   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:15:13.396018   19299 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
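
"scp memory --> <path>" means the runner streams an in-memory byte slice over the SSH session set up just above, rather than copying a local file. A rough equivalent with golang.org/x/crypto/ssh, assuming an already-connected *ssh.Client; the tee-based transfer and the pushBytes name are assumptions for illustration, not minikube's ssh_runner implementation:

```go
package sketch

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// pushBytes opens an SSH session and streams an in-memory payload
// into a remote file, mimicking the "scp memory --> /path" lines.
func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// tee writes stdin to the destination; sudo covers root-owned paths.
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
}
```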
	I0919 22:15:13.484339   19299 addons.go:238] Setting addon gcp-auth=true in "addons-266998"
	I0919 22:15:13.484391   19299 host.go:66] Checking if "addons-266998" exists ...
	I0919 22:15:13.484688   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:13.484717   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:13.498296   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41327
	I0919 22:15:13.498800   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:13.499232   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:13.499258   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:13.499628   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:13.500087   19299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:15:13.500116   19299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:15:13.514743   19299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34405
	I0919 22:15:13.515271   19299 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:15:13.515845   19299 main.go:141] libmachine: Using API Version  1
	I0919 22:15:13.515865   19299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:15:13.516197   19299 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:15:13.516433   19299 main.go:141] libmachine: (addons-266998) Calling .GetState
	I0919 22:15:13.518214   19299 main.go:141] libmachine: (addons-266998) Calling .DriverName
	I0919 22:15:13.518451   19299 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 22:15:13.518474   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHHostname
	I0919 22:15:13.521876   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:13.522324   19299 main.go:141] libmachine: (addons-266998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:32:d6", ip: ""} in network mk-addons-266998: {Iface:virbr1 ExpiryTime:2025-09-19 23:14:35 +0000 UTC Type:0 Mac:52:54:00:25:32:d6 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:addons-266998 Clientid:01:52:54:00:25:32:d6}
	I0919 22:15:13.522355   19299 main.go:141] libmachine: (addons-266998) DBG | domain addons-266998 has defined IP address 192.168.39.205 and MAC address 52:54:00:25:32:d6 in network mk-addons-266998
	I0919 22:15:13.522580   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHPort
	I0919 22:15:13.522786   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHKeyPath
	I0919 22:15:13.522970   19299 main.go:141] libmachine: (addons-266998) Calling .GetSSHUsername
	I0919 22:15:13.523102   19299 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/addons-266998/id_rsa Username:docker}
	I0919 22:15:15.666181   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.653649271s)
	I0919 22:15:15.666234   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:15.666237   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.318770074s)
	I0919 22:15:15.666279   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:15.666295   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:15.666246   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:15.666313   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.329135812s)
	I0919 22:15:15.666371   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.276204374s)
	I0919 22:15:15.666390   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:15.666399   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	W0919 22:15:15.666397   19299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:15.666440   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.921993721s)
	I0919 22:15:15.666453   19299 retry.go:31] will retry after 357.049015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
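
The retried failure above is kubectl's client-side schema validation: every YAML document must declare apiVersion and kind, and ig-crd.yaml evidently ships a document missing both. A small Go sketch of that check, assuming sigs.k8s.io/yaml and a simplified document split (real YAML streams only split on "---" at line starts); checkManifest is an illustrative name:

```go
package sketch

import (
	"fmt"
	"strings"

	"sigs.k8s.io/yaml"
)

// typeMeta is the minimal header every Kubernetes manifest must carry;
// kubectl rejects documents where either field is empty
// ("apiVersion not set, kind not set").
type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

// checkManifest splits a multi-document YAML stream and verifies
// each document declares both header fields.
func checkManifest(data string) error {
	for i, doc := range strings.Split(data, "\n---") {
		if strings.TrimSpace(doc) == "" {
			continue
		}
		var tm typeMeta
		if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
			return fmt.Errorf("doc %d: %w", i, err)
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			return fmt.Errorf("doc %d: apiVersion/kind not set", i)
		}
	}
	return nil
}
```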
	I0919 22:15:15.666468   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:15.666485   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:15.666525   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.760130778s)
	I0919 22:15:15.666548   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:15.666555   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:15.666557   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:15.666557   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:15.666568   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:15.666577   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:15.666580   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:15.666583   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:15.666590   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:15.666598   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:15.666605   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:15.666580   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.082779677s)
	I0919 22:15:15.666652   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:15.666662   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:15.669116   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:15.669164   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:15.669196   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:15.669203   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:15.669213   19299 addons.go:479] Verifying addon ingress=true in "addons-266998"
	I0919 22:15:15.669309   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:15.669545   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:15.669651   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:15.669672   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:15.669684   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:15.669691   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:15.669694   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:15.669709   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:15.669773   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:15.669792   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:15.669801   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:15.669804   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:15.669815   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:15.669825   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:15.669824   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:15.669833   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:15.669893   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:15.669900   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:15.669908   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:15.669914   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:15.669808   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:15.671055   19299 out.go:179] * Verifying ingress addon...
	I0919 22:15:15.672104   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:15.672116   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:15.672104   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:15.672150   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:15.672152   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:15.672161   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:15.672167   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:15.672175   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:15.672183   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:15.672192   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:15.672194   19299 addons.go:479] Verifying addon registry=true in "addons-266998"
	I0919 22:15:15.672200   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:15.672237   19299 addons.go:479] Verifying addon metrics-server=true in "addons-266998"
	I0919 22:15:15.672226   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:15.673282   19299 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-266998 service yakd-dashboard -n yakd-dashboard
	
	I0919 22:15:15.673310   19299 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 22:15:15.673345   19299 out.go:179] * Verifying registry addon...
	I0919 22:15:15.675170   19299 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0919 22:15:15.731517   19299 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 22:15:15.731539   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:15.734296   19299 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 22:15:15.734318   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
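
Each of these kapi.go waits is a list-and-poll on a label selector until every matching pod reaches Running. A minimal client-go sketch of one poll iteration (podsRunning is an illustrative name):

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podsRunning lists pods matching the label selector and reports
// whether every one of them has reached phase Running -- the
// transition the wait loops above are polling for.
func podsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil
		}
	}
	return true, nil
}
```

For the ingress wait above this would be called as podsRunning(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx").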
	I0919 22:15:15.753534   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:15.753556   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:15.753847   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:15.753883   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:15.753893   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	W0919 22:15:15.754002   19299 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
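
The storage-class error above is an optimistic-concurrency conflict: the update was sent with a stale resourceVersion after something else modified the StorageClass. client-go's standard remedy is to re-read and retry on conflict; a hedged sketch of what such a fix could look like (markNonDefault is illustrative, not the addon's actual code):

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markNonDefault clears the default-class annotation, re-reading the
// object on every attempt so a stale resourceVersion (the "object has
// been modified" conflict above) is retried instead of surfacing.
func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
}
```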
	I0919 22:15:15.758198   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:15.758217   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:15.758454   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:15.758470   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:15.831737   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.79923652s)
	W0919 22:15:15.831771   19299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 22:15:15.831789   19299 retry.go:31] will retry after 285.657411ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
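
The snapshot-class failure is an ordering problem: the VolumeSnapshotClass CR is applied in the same batch as the CRD that defines it, before the API server has established the CRD, and the subsequent --force retry succeeds once it has. One way to make the ordering explicit is to wait for the CRD's Established condition before applying CRs; a sketch using the apiextensions clientset (waitForCRD and the timeouts are illustrative assumptions):

```go
package sketch

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForCRD blocks until the named CRD reports Established=True --
// the precondition the "ensure CRDs are installed first" error is
// complaining about.
func waitForCRD(ctx context.Context, cs apiextclient.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not created yet; keep polling
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}
```

For this addon the call would be waitForCRD(ctx, cs, "volumesnapshotclasses.snapshot.storage.k8s.io") before applying csi-hostpath-snapshotclass.yaml.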
	I0919 22:15:16.024114   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:16.117629   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 22:15:16.198519   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:16.198683   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:16.697498   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:16.710291   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:16.736953   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.063415376s)
	I0919 22:15:16.736978   19299 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.797930271s)
	I0919 22:15:16.737009   19299 api_server.go:72] duration metric: took 11.2347172s to wait for apiserver process to appear ...
	I0919 22:15:16.737009   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:16.737019   19299 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:15:16.737024   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:16.737032   19299 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.218564548s)
	I0919 22:15:16.737042   19299 api_server.go:253] Checking apiserver healthz at https://192.168.39.205:8443/healthz ...
	I0919 22:15:16.737325   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:16.737340   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:16.737360   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:16.737377   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:16.737397   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:16.737717   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:16.737740   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:16.737749   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:16.737760   19299 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-266998"
	I0919 22:15:16.739294   19299 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0919 22:15:16.740127   19299 out.go:179] * Verifying csi-hostpath-driver addon...
	I0919 22:15:16.741632   19299 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0919 22:15:16.742310   19299 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 22:15:16.742651   19299 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 22:15:16.742666   19299 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 22:15:16.775941   19299 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 22:15:16.775966   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:16.787612   19299 api_server.go:279] https://192.168.39.205:8443/healthz returned 200:
	ok
	I0919 22:15:16.797024   19299 api_server.go:141] control plane version: v1.34.0
	I0919 22:15:16.797050   19299 api_server.go:131] duration metric: took 60.024001ms to wait for apiserver health ...
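
The healthz probe above is a plain HTTPS GET against the apiserver expecting a 200 response with body "ok". A minimal sketch under those assumptions; certificate verification is skipped here because the test apiserver's cert is self-signed, whereas a real client would trust the cluster CA:

```go
package sketch

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

// checkHealthz performs the same probe as api_server.go above:
// GET https://<ip>:8443/healthz, healthy iff 200 and body "ok".
func checkHealthz(url string) error {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
	}
	return nil
}
```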
	I0919 22:15:16.797058   19299 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:15:16.832521   19299 system_pods.go:59] 20 kube-system pods found
	I0919 22:15:16.832575   19299 system_pods.go:61] "amd-gpu-device-plugin-7vf6w" [2c547168-e5a0-4407-9166-07bdef5312cc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0919 22:15:16.832593   19299 system_pods.go:61] "coredns-66bc5c9577-6nz2f" [790a9a18-e498-42de-90c9-5868ade01dab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:15:16.832607   19299 system_pods.go:61] "coredns-66bc5c9577-8m46p" [0ed181ba-513d-4ac3-8c12-c46574ac26b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:15:16.832615   19299 system_pods.go:61] "csi-hostpath-attacher-0" [00559e38-7f43-4764-8cd4-d1b32a1ea528] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 22:15:16.832622   19299 system_pods.go:61] "csi-hostpath-resizer-0" [3863c7c6-aa3b-46cf-9ac5-2007ecc69f5f] Pending
	I0919 22:15:16.832631   19299 system_pods.go:61] "csi-hostpathplugin-rzhgp" [ccba4174-90bc-42c9-8552-a257fcb3e896] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 22:15:16.832643   19299 system_pods.go:61] "etcd-addons-266998" [f5cfbb4f-91f3-4bc3-b566-9e9ae4cb1d21] Running
	I0919 22:15:16.832654   19299 system_pods.go:61] "kube-apiserver-addons-266998" [e4aeb31e-1f55-4e23-85c7-2cebbd820ae9] Running
	I0919 22:15:16.832664   19299 system_pods.go:61] "kube-controller-manager-addons-266998" [27ad64c5-a1a4-4152-8afb-d5cc2564c178] Running
	I0919 22:15:16.832677   19299 system_pods.go:61] "kube-ingress-dns-minikube" [b6782948-68b4-4d0d-86a8-29ff87b98100] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0919 22:15:16.832684   19299 system_pods.go:61] "kube-proxy-hjc8c" [d2516043-8672-4a06-8be4-5a2bc2517230] Running
	I0919 22:15:16.832691   19299 system_pods.go:61] "kube-scheduler-addons-266998" [56768bf2-b3d9-4dcf-a753-d0ae41728b50] Running
	I0919 22:15:16.832698   19299 system_pods.go:61] "metrics-server-85b7d694d7-w2zlg" [5e422a9c-fc42-44b6-b9a0-5b52446522be] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 22:15:16.832707   19299 system_pods.go:61] "nvidia-device-plugin-daemonset-g7n82" [fa453c43-6ba3-4d31-87ea-6a4bd054a758] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0919 22:15:16.832741   19299 system_pods.go:61] "registry-66898fdd98-mqfsk" [53922a4c-8b51-430a-a161-b52ae6012395] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0919 22:15:16.832755   19299 system_pods.go:61] "registry-creds-764b6fb674-85489" [bf9ec5f0-3a3d-43f1-bb9e-c4d2c897dce1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0919 22:15:16.832766   19299 system_pods.go:61] "registry-proxy-fbm84" [0543b709-064a-421e-8de0-6bcf044aa6d9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0919 22:15:16.832773   19299 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ffhjn" [c74c9d5b-1c71-487b-80a1-269782e9e2b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 22:15:16.832782   19299 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jj2ct" [b0403e7f-b580-45e4-9f0e-336ef2b11bbb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 22:15:16.832788   19299 system_pods.go:61] "storage-provisioner" [f5bc225a-996d-426c-9d33-0dc4b9a28f18] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:15:16.832797   19299 system_pods.go:74] duration metric: took 35.732945ms to wait for pod list to return data ...
	I0919 22:15:16.832810   19299 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:15:16.839738   19299 default_sa.go:45] found service account: "default"
	I0919 22:15:16.839759   19299 default_sa.go:55] duration metric: took 6.942245ms for default service account to be created ...
	I0919 22:15:16.839768   19299 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:15:16.859604   19299 system_pods.go:86] 20 kube-system pods found
	I0919 22:15:16.859632   19299 system_pods.go:89] "amd-gpu-device-plugin-7vf6w" [2c547168-e5a0-4407-9166-07bdef5312cc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0919 22:15:16.859639   19299 system_pods.go:89] "coredns-66bc5c9577-6nz2f" [790a9a18-e498-42de-90c9-5868ade01dab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:15:16.859647   19299 system_pods.go:89] "coredns-66bc5c9577-8m46p" [0ed181ba-513d-4ac3-8c12-c46574ac26b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:15:16.859653   19299 system_pods.go:89] "csi-hostpath-attacher-0" [00559e38-7f43-4764-8cd4-d1b32a1ea528] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 22:15:16.859658   19299 system_pods.go:89] "csi-hostpath-resizer-0" [3863c7c6-aa3b-46cf-9ac5-2007ecc69f5f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0919 22:15:16.859663   19299 system_pods.go:89] "csi-hostpathplugin-rzhgp" [ccba4174-90bc-42c9-8552-a257fcb3e896] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 22:15:16.859668   19299 system_pods.go:89] "etcd-addons-266998" [f5cfbb4f-91f3-4bc3-b566-9e9ae4cb1d21] Running
	I0919 22:15:16.859671   19299 system_pods.go:89] "kube-apiserver-addons-266998" [e4aeb31e-1f55-4e23-85c7-2cebbd820ae9] Running
	I0919 22:15:16.859675   19299 system_pods.go:89] "kube-controller-manager-addons-266998" [27ad64c5-a1a4-4152-8afb-d5cc2564c178] Running
	I0919 22:15:16.859679   19299 system_pods.go:89] "kube-ingress-dns-minikube" [b6782948-68b4-4d0d-86a8-29ff87b98100] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0919 22:15:16.859682   19299 system_pods.go:89] "kube-proxy-hjc8c" [d2516043-8672-4a06-8be4-5a2bc2517230] Running
	I0919 22:15:16.859686   19299 system_pods.go:89] "kube-scheduler-addons-266998" [56768bf2-b3d9-4dcf-a753-d0ae41728b50] Running
	I0919 22:15:16.859692   19299 system_pods.go:89] "metrics-server-85b7d694d7-w2zlg" [5e422a9c-fc42-44b6-b9a0-5b52446522be] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 22:15:16.859700   19299 system_pods.go:89] "nvidia-device-plugin-daemonset-g7n82" [fa453c43-6ba3-4d31-87ea-6a4bd054a758] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0919 22:15:16.859705   19299 system_pods.go:89] "registry-66898fdd98-mqfsk" [53922a4c-8b51-430a-a161-b52ae6012395] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0919 22:15:16.859712   19299 system_pods.go:89] "registry-creds-764b6fb674-85489" [bf9ec5f0-3a3d-43f1-bb9e-c4d2c897dce1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0919 22:15:16.859716   19299 system_pods.go:89] "registry-proxy-fbm84" [0543b709-064a-421e-8de0-6bcf044aa6d9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0919 22:15:16.859734   19299 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ffhjn" [c74c9d5b-1c71-487b-80a1-269782e9e2b7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 22:15:16.859740   19299 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jj2ct" [b0403e7f-b580-45e4-9f0e-336ef2b11bbb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 22:15:16.859745   19299 system_pods.go:89] "storage-provisioner" [f5bc225a-996d-426c-9d33-0dc4b9a28f18] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:15:16.859759   19299 system_pods.go:126] duration metric: took 19.985956ms to wait for k8s-apps to be running ...
	I0919 22:15:16.859773   19299 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:15:16.859812   19299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:15:17.068638   19299 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 22:15:17.068667   19299 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 22:15:17.182474   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:17.183847   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:17.241177   19299 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 22:15:17.241203   19299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0919 22:15:17.284129   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:17.408922   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 22:15:17.685616   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:17.687927   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:17.781988   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:18.184599   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:18.184636   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:18.252835   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:18.679074   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:18.684874   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:18.750355   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:19.239577   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:19.271452   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:19.341152   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:19.683313   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:19.692096   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:19.758514   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:19.776661   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.752503892s)
	I0919 22:15:19.776723   19299 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.916882888s)
	I0919 22:15:19.776761   19299 system_svc.go:56] duration metric: took 2.916984086s WaitForService to wait for kubelet
	I0919 22:15:19.776786   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.367839675s)
	I0919 22:15:19.776776   19299 kubeadm.go:578] duration metric: took 14.274485184s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:15:19.776803   19299 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:15:19.776816   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:19.776834   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	W0919 22:15:19.776721   19299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:19.776895   19299 retry.go:31] will retry after 298.435534ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:19.776661   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.658980457s)
	I0919 22:15:19.776998   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:19.777021   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:19.777148   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:19.777161   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:19.777169   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:19.777175   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:19.777269   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:19.777299   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:19.777309   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:15:19.777316   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:15:19.777468   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:19.777488   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:19.777587   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:15:19.777604   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:15:19.777616   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:15:19.778388   19299 addons.go:479] Verifying addon gcp-auth=true in "addons-266998"
	I0919 22:15:19.780102   19299 out.go:179] * Verifying gcp-auth addon...
	I0919 22:15:19.781889   19299 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 22:15:19.784916   19299 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 22:15:19.784946   19299 node_conditions.go:123] node cpu capacity is 2
	I0919 22:15:19.784971   19299 node_conditions.go:105] duration metric: took 8.162018ms to run NodePressure ...
	I0919 22:15:19.784984   19299 start.go:241] waiting for startup goroutines ...
	I0919 22:15:19.787149   19299 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 22:15:19.787167   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:20.075465   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:20.184068   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:20.189011   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:20.249639   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:20.286885   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:20.680302   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:20.684769   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:20.750006   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:20.789042   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:21.181925   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:21.186339   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:21.247607   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:21.289328   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:21.582605   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.507098618s)
	W0919 22:15:21.582651   19299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:21.582667   19299 retry.go:31] will retry after 461.085302ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:21.681056   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:21.681241   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:21.748400   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:21.790338   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:22.044639   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:22.181849   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:22.182823   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:22.250419   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:22.285058   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:22.682466   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:22.683134   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:22.747889   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:22.789256   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:23.150215   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.105535289s)
	W0919 22:15:23.150264   19299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:23.150288   19299 retry.go:31] will retry after 627.377326ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:23.180441   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:23.183202   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:23.252348   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:23.287386   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:23.679161   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:23.680543   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:23.748478   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:23.778571   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:23.786682   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:24.179470   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:24.179692   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:24.250343   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:24.288040   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:24.679811   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:24.680264   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:24.748625   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:24.790711   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:24.854132   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.075521427s)
	W0919 22:15:24.854164   19299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:24.854180   19299 retry.go:31] will retry after 1.340231814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0919 22:15:25.181132   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:25.181775   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:25.246803   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:25.287586   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:25.678794   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:25.680483   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:25.758269   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:25.820290   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:26.180983   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:26.182875   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:26.194906   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:26.248571   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:26.286459   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:26.680461   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:26.682676   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:26.747650   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:26.788492   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:27.182431   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:27.183420   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:27.247238   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:27.287324   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:27.321915   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.126955651s)
	W0919 22:15:27.321956   19299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:27.321977   19299 retry.go:31] will retry after 1.299554322s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
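The ssh_runner.go Run/Completed pairs above show the execution pattern: log the command, run it on the guest, and when it takes longer than about a second, log a completion line with the elapsed duration. A minimal sketch of that run-and-time pattern, using plain os/exec instead of minikube's actual SSH transport (the kubectl arguments are illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// run mirrors the Run/Completed pairs in the log: print the command, execute
// it, and if it took more than a second, print the elapsed time on completion.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	fmt.Printf("Run: %s\n", cmd)
	start := time.Now()
	out, err := cmd.CombinedOutput()
	if d := time.Since(start); d > time.Second {
		fmt.Printf("Completed: %s: (%s)\n", cmd, d)
	}
	if err != nil {
		// Surface the combined stdout/stderr with the error, as the log does.
		return fmt.Errorf("%w\noutput:\n%s", err, out)
	}
	return nil
}

func main() {
	// Arguments are illustrative; any command works.
	if err := run("kubectl", "apply", "--force", "-f", "ig-crd.yaml"); err != nil {
		fmt.Println("apply failed, will retry:", err)
	}
}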
	I0919 22:15:27.681885   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:27.682952   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:27.750447   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:27.789372   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:28.180349   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:28.180572   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:28.250019   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:28.286144   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:28.622233   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:28.677372   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:28.680872   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:28.749397   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:28.787713   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:29.177070   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:29.179927   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:29.252292   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:29.285811   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:29.765182   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:29.765619   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:29.765989   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:29.767550   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.145281822s)
	W0919 22:15:29.767588   19299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:29.767609   19299 retry.go:31] will retry after 3.422579695s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0919 22:15:29.785975   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:30.183472   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:30.185915   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:30.245944   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:30.286751   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:30.679099   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:30.680305   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:30.749423   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:30.787622   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:31.575176   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:31.575344   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:31.578180   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:31.578366   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:31.680181   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:31.681295   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:31.747662   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:31.785498   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:32.179878   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:32.179970   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:32.247214   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:32.286449   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:32.680400   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:32.680742   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:32.750681   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:32.789083   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:33.180285   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:33.181453   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:33.190542   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:33.249175   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:33.284458   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:33.682610   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:33.682684   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:33.749300   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:33.787255   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:34.063439   19299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:34.063476   19299 retry.go:31] will retry after 3.821276311s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0919 22:15:34.180837   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:34.185268   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:34.248000   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:34.285865   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:34.681905   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:34.681961   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:34.746332   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:34.789917   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:35.180076   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:35.181760   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:35.248544   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:35.288752   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:35.681573   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:35.681851   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:35.748580   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:35.785881   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:36.178099   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:36.180353   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:36.249621   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:36.289219   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:36.679586   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:36.681562   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:36.747444   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:36.786113   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:37.179662   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:37.181559   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:37.246313   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:37.285269   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:37.831757   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:37.835597   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:37.835872   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:37.836285   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:37.885361   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:38.177173   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:38.183023   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:38.249179   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:38.293277   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:38.679849   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:38.679950   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:38.748527   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:38.786069   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:38.790884   19299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:38.790910   19299 retry.go:31] will retry after 4.328635275s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0919 22:15:39.178500   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:39.179639   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:39.278680   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:39.285322   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:39.680230   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:39.681526   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:39.748754   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:39.785963   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:40.181939   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:40.182105   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:40.246637   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:40.285977   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:40.687459   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:40.687805   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:40.758247   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:40.856436   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:41.178023   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:41.179238   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:41.248340   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:41.285470   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:41.678309   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:41.680140   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:41.781114   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:41.785746   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:42.179450   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:42.179480   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:42.246831   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:42.285437   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:42.679507   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:42.679644   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:42.747774   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:42.788508   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:43.120450   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:43.178954   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:43.183134   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:43.249774   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:43.289171   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:43.677830   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:43.680469   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:43.749409   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:43.787382   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:44.106140   19299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:44.106188   19299 retry.go:31] will retry after 6.747832414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0919 22:15:44.180589   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:44.182413   19299 kapi.go:107] duration metric: took 28.507239341s to wait for kubernetes.io/minikube-addons=registry ...
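The kapi.go:96 lines poll each addon's pods by label selector about twice a second until one reaches Running; the registry wait above is the first to complete, after 28.5s. A minimal client-go sketch of such a wait loop (the kubeconfig path, namespace, and selector are illustrative; this is not minikube's kapi implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPod polls pods matching selector in ns until one is Running or the
// timeout expires, printing the Pending state between polls.
func waitForPod(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return nil
			}
		}
		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("pod %q not Running within %v", selector, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and selector are hypothetical examples.
	if err := waitForPod(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 5*time.Minute); err != nil {
		panic(err)
	}
}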
	I0919 22:15:44.249953   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:44.287222   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:44.678026   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:44.746200   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:44.787081   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:45.177829   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:45.246519   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:45.286877   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:45.678431   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:45.751925   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:45.791496   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:46.179872   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:46.246310   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:46.289771   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:46.680937   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:46.750801   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:46.787575   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:47.180317   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:47.287997   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:47.294126   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:47.678948   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:47.749990   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:47.788579   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:48.177767   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:48.247875   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:48.289525   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:48.681895   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:48.752027   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:48.785813   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:49.177892   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:49.249532   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:49.292520   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:49.677678   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:49.747017   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:49.786443   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:50.178414   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:50.247835   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:50.285688   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:50.676784   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:50.746306   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:50.786551   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:50.854689   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:51.178711   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:51.254695   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:51.286664   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:51.678917   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:51.751123   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:51.786007   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:52.180649   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:52.249523   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:52.293707   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:52.319488   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.464714183s)
	W0919 22:15:52.319528   19299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:52.319548   19299 retry.go:31] will retry after 20.514086761s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
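Across the eight failed attempts the retry delay grows from 627ms through roughly 1.3s, 3.4s, 3.8s, 4.3s, and 6.7s to 20.5s, consistent with exponential backoff plus jitter. A minimal sketch of that pattern with made-up parameters (minikube's retry.go may use a different schedule and jitter):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs op up to attempts times, sleeping base*2^n plus up-to-base
// jitter between failures, which yields a roughly doubling schedule like
// the one in the log above.
func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	for n := 0; n < attempts; n++ {
		if err = op(); err == nil {
			return nil
		}
		sleep := base<<uint(n) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	calls := 0
	_ = retry(5, 500*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return fmt.Errorf("apply failed (attempt %d)", calls)
		}
		return nil
	})
}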
	I0919 22:15:52.684699   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:52.751771   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:52.788000   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:53.178680   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:53.250973   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:53.288100   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:53.678689   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:53.746617   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:53.785919   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:54.181965   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:54.247566   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:54.287888   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:54.679705   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:54.751288   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:54.785017   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:55.184201   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:55.246635   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:55.286520   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:55.678399   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:55.747782   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:55.786832   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:56.180403   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:56.245477   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:56.285552   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:56.679277   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:56.751805   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:56.788766   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:57.178092   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:57.247883   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:57.287264   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:57.681437   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:57.748142   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:57.786038   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:58.325071   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:58.325623   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:58.325670   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:58.677846   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:58.749139   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:58.786998   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:59.178878   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:59.280797   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:59.286510   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:59.678617   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:59.746591   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:59.786486   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:00.177494   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:00.249365   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:00.288254   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:00.679488   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:00.749383   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:00.786839   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:01.183754   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:01.247484   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:01.286011   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:01.678806   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:01.747361   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:01.787872   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:02.182926   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:02.250025   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:02.286375   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:02.678031   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:02.746906   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:02.785930   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:03.181284   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:03.247758   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:03.373225   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:03.678803   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:03.749327   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:03.788045   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:04.181709   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:04.287821   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:04.290512   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:04.678001   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:04.747466   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:04.786422   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:05.177050   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:05.249835   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:05.285435   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:05.678765   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:05.747593   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:05.786327   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:06.177652   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:06.246194   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:06.285771   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:06.679767   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:06.746976   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:06.788136   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:07.179689   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:07.246790   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:07.286849   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:07.677694   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:07.747742   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:07.788353   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:08.177905   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:08.246647   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:08.288524   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:08.677480   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:08.747597   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:08.786779   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:09.179585   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:09.530441   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:09.530709   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:09.679800   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:09.749815   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:09.791642   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:10.178450   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:10.245656   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:10.287304   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:10.680332   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:10.749387   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:10.796351   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:11.177460   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:11.249877   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:11.288158   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:11.677941   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:11.747143   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:11.786949   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:12.178305   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:12.280798   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:12.286130   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:12.678634   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:12.749333   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:12.785757   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:12.833822   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:16:13.179557   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:13.252075   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:13.285636   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:13.678784   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:13.778921   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:13.785368   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:13.858583   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.02471841s)
	W0919 22:16:13.858632   19299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:16:13.858661   19299 retry.go:31] will retry after 17.799631073s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:16:14.178431   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:14.249175   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:14.289415   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:14.678592   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:14.746596   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:14.788425   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:15.181044   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:15.282828   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:15.288251   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:15.678474   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:15.761504   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:15.809420   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:16.179294   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:16.248457   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:16.287666   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:16.679668   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:16.748245   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:16.787221   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:17.182245   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:17.251388   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:17.286276   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:17.679454   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:17.746639   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:17.788093   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:18.178193   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:18.277791   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:18.285467   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:18.678522   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:18.746860   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:18.789632   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:19.186070   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:19.696918   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:19.701267   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:19.701347   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:19.748995   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:19.791416   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:20.177690   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:20.246302   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:20.285416   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:20.676890   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:20.749842   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:20.787655   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:21.178143   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:21.246504   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:21.285508   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:21.677818   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:21.747100   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:21.785634   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:22.179624   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:22.246608   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:22.287060   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:22.678944   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:22.749940   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:22.785676   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:23.180753   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:23.246542   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:23.286418   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:23.678166   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:23.748374   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:23.788501   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:24.181179   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:24.248964   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:24.288395   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:24.680692   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:24.749682   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:24.786531   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:25.177553   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:25.246929   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:25.285765   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:25.681162   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:25.749346   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:25.789678   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:26.177680   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:26.246461   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:26.285365   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:26.840315   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:26.843550   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:26.843901   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:27.177670   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:27.251654   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:27.286651   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:27.683820   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:27.757809   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:27.787645   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:28.177968   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:28.247260   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:28.285686   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:28.677701   19299 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:28.746451   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:28.785383   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:29.179319   19299 kapi.go:107] duration metric: took 1m13.506009691s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0919 22:16:29.283739   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:29.294292   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:29.748877   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:29.788792   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:30.248272   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:30.288772   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:30.776888   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:30.786806   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:31.250443   19299 kapi.go:107] duration metric: took 1m14.508132162s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0919 22:16:31.285984   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:31.659484   19299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:16:31.796393   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:32.291635   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:32.772985   19299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.113453102s)
	W0919 22:16:32.773025   19299 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:16:32.773090   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:16:32.773104   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:16:32.773362   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:16:32.773380   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 22:16:32.773390   19299 main.go:141] libmachine: Making call to close driver server
	I0919 22:16:32.773398   19299 main.go:141] libmachine: (addons-266998) Calling .Close
	I0919 22:16:32.773659   19299 main.go:141] libmachine: (addons-266998) DBG | Closing plugin on server side
	I0919 22:16:32.773717   19299 main.go:141] libmachine: Successfully made call to close driver server
	I0919 22:16:32.773747   19299 main.go:141] libmachine: Making call to close connection to plugin binary
	W0919 22:16:32.773837   19299 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
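addons.go treats the failed apply as retryable: the first failure at 22:16:13 schedules a second attempt (retry.go logs a computed delay of ~17.8s), the reapply at 22:16:31 fails the same way, and the addon is then reported as failed above. A shell approximation of the two attempts visible in this log — the loop and fixed sleep are illustrative only; minikube computes the backoff internally:

    for attempt in 1 2; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.0/kubectl apply --force \
        -f /etc/kubernetes/addons/ig-crd.yaml \
        -f /etc/kubernetes/addons/ig-deployment.yaml && break
      sleep 18   # stand-in for the 17.799631073s delay logged by retry.go
    done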
	I0919 22:16:32.786850   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:33.286698   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:33.786789   19299 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:34.286000   19299 kapi.go:107] duration metric: took 1m14.504106988s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0919 22:16:34.287574   19299 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-266998 cluster.
	I0919 22:16:34.289008   19299 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 22:16:34.290182   19299 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
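For reference, the opt-out described above is a pod label under the gcp-auth-skip-secret key; a minimal sketch, in which the pod name, image, and label value are assumptions — the log names only the key:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds               # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"   # key named in the message; value assumed
    spec:
      containers:
        - name: app
          image: busybox               # hypothetical image
          command: ["sleep", "3600"]

The refresh path the last message refers to would be rerunning the enable with the flag it names, e.g. out/minikube-linux-amd64 -p addons-266998 addons enable gcp-auth --refresh.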
	I0919 22:16:34.291446   19299 out.go:179] * Enabled addons: ingress-dns, registry-creds, amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0919 22:16:34.292710   19299 addons.go:514] duration metric: took 1m28.790367821s for enable addons: enabled=[ingress-dns registry-creds amd-gpu-device-plugin nvidia-device-plugin storage-provisioner cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0919 22:16:34.292774   19299 start.go:246] waiting for cluster config update ...
	I0919 22:16:34.292791   19299 start.go:255] writing updated cluster config ...
	I0919 22:16:34.293047   19299 ssh_runner.go:195] Run: rm -f paused
	I0919 22:16:34.303232   19299 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:16:34.312824   19299 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6nz2f" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:34.322759   19299 pod_ready.go:94] pod "coredns-66bc5c9577-6nz2f" is "Ready"
	I0919 22:16:34.322792   19299 pod_ready.go:86] duration metric: took 9.941536ms for pod "coredns-66bc5c9577-6nz2f" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:34.325453   19299 pod_ready.go:83] waiting for pod "etcd-addons-266998" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:34.330626   19299 pod_ready.go:94] pod "etcd-addons-266998" is "Ready"
	I0919 22:16:34.330647   19299 pod_ready.go:86] duration metric: took 5.17552ms for pod "etcd-addons-266998" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:34.333582   19299 pod_ready.go:83] waiting for pod "kube-apiserver-addons-266998" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:34.338657   19299 pod_ready.go:94] pod "kube-apiserver-addons-266998" is "Ready"
	I0919 22:16:34.338676   19299 pod_ready.go:86] duration metric: took 5.077796ms for pod "kube-apiserver-addons-266998" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:34.341337   19299 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-266998" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:34.707545   19299 pod_ready.go:94] pod "kube-controller-manager-addons-266998" is "Ready"
	I0919 22:16:34.707569   19299 pod_ready.go:86] duration metric: took 366.216661ms for pod "kube-controller-manager-addons-266998" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:34.909124   19299 pod_ready.go:83] waiting for pod "kube-proxy-hjc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:35.308031   19299 pod_ready.go:94] pod "kube-proxy-hjc8c" is "Ready"
	I0919 22:16:35.308054   19299 pod_ready.go:86] duration metric: took 398.888891ms for pod "kube-proxy-hjc8c" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:35.507865   19299 pod_ready.go:83] waiting for pod "kube-scheduler-addons-266998" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:35.907757   19299 pod_ready.go:94] pod "kube-scheduler-addons-266998" is "Ready"
	I0919 22:16:35.907795   19299 pod_ready.go:86] duration metric: took 399.903004ms for pod "kube-scheduler-addons-266998" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:35.907812   19299 pod_ready.go:40] duration metric: took 1.604532721s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
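The pod_ready polling above checks, per label selector, that each kube-system control-plane pod reports the Ready condition (or has been deleted). A rough kubectl equivalent for one of the listed selectors, reusing the context, namespace, and 4m budget from the log:

    kubectl --context addons-266998 -n kube-system wait pod \
      --selector=k8s-app=kube-dns \
      --for=condition=Ready --timeout=4m

kubectl wait has no "or be gone" semantics, so this is only an approximation of minikube's check.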
	I0919 22:16:35.952982   19299 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 22:16:35.954931   19299 out.go:179] * Done! kubectl is now configured to use "addons-266998" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.265239420Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9982359-a0e6-4017-bec2-b6dd1ee490c4 name=/runtime.v1.RuntimeService/Version
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.266349878Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a56f2798-c558-4d8e-98fc-5dd45476edc3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.267635211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758320381267603776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a56f2798-c558-4d8e-98fc-5dd45476edc3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.268556765Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc8ee7e6-9213-4cb2-9cab-53325028b7c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.268786147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc8ee7e6-9213-4cb2-9cab-53325028b7c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.269243313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c29e5c8b1346c496bdb9941fbca08dfd1c55d92fba1a8a112e39ff035406f957,PodSandboxId:52ff26cc4f1b58693a710cbf91f46cc5b9cee5ca719343b4714d24033a62db91,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1758320237017108194,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 71e323d6-d37d-4ac6-88ea-2a015c817d71,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba749dfb8baec98e6b4b6535ec99f089cf83f691f989f08c7b1e6c5a2ee4cfb,PodSandboxId:a8c1ab0735000e63cac9b25bc83ebb1a7ea601b4ce79b830246e565d072aa8fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758320198291959295,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad155804-0aa9-4ac7-b063-6258e2f3e249,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96a7ca2027442e472eb5bb0563efe24c25360f927a8a52deecdc826bf935a05,PodSandboxId:f528e122d97ef8ecf74ca5206f6192e6336be94df30bf010297bbe7de57dde7b,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758320188486228221,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-nnd8q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed57d5a3-00a9-4f71-a36c-c9439daedaaf,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:58cd57ebb29a77aab16f4f547a550fd4310f42bfc23cf7d2fee79fd2bd7c8988,PodSandboxId:56bcb4c3289089ecb0be64a6d6567baf006f407ea6c8fa33668ba6a7afdfc7d1,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1758320168750389398,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5p8nh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5023eb94-efc8-4f11-9d8b-7662258db69d,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0099e4b3191e3811d53b1f06ff04f4632de4f6a40b1297e943847a1d740ba9ce,PodSandboxId:382b456303fb342c30b90f8553e42a7a3d756b440d1a074c22c4d9da78e9b515,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758320167890345873,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h8jc7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 58dc8ae5-11b7-4f11-817c-2313994529ec,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc563cc237ad054c79f4ecd5bdf5a4289217b0fa6349f2175e228d3d148eb72,PodSandboxId:29bec77f11c04fc0919d841eea055b13c1364f792f154dffb2962a1d903d93fa,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:966
0a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758320164142199535,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-wfgd6,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 427fd9ea-6757-43dc-bfb1-34e3b4dc7417,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d9d9e6f7cfff4c3e5c54c4a6c4f05a7761e4868b756605ede455aa265f02bd,PodSandboxId:4ec7e33290fa9e7553eb0333079818e20a21b61080eccd16be30c637ad7fa621,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758320159505194173,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6782948-68b4-4d0d-86a8-29ff87b98100,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504c4a05c990c550fbe4407197ea7b7e8abd1d1c5e578c648db202c14f54dc71,PodSandboxId:dbf3eb2b087be5aad68c079f4f096b35e07f4f2361ccedf119ab3d1c65b18a61,Metadata:&ContainerMetadata{Name:storage
-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758320114559330492,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5bc225a-996d-426c-9d33-0dc4b9a28f18,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efdccfb7c5871dd764703e68656f43252215f96ad04985fa1106848a93858264,PodSandboxId:b303add445c01e26711da81690ffa4c536937a0a8cd7336e6bb176c15dd5a37c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plug
in,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758320114763953414,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7vf6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c547168-e5a0-4407-9166-07bdef5312cc,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09e757f8fa5ecd8d41c49111638a56c414bf4bb8de16990f5343a6f1d1f6023b,PodSandboxId:9ff3349d533e394011e8bcd3065e3b059fb65ebe06273382da8625d6186a5b02,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758320107071484949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6nz2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 790a9a18-e498-42de-90c9-5868ade01dab,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c624489d26219ce761f4d2dd6908c6fb9ee5ac73f8f569a9407fd12737ae160,PodSandboxId:e92d1607c281647223226793641da14b877ffe1148e11ae993ed1a4b6d758cc8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758320105474676303,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjc8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2516043-8672-4a06-8be4-5a2bc2517230,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf91fd6c2612dea5041a6513c568a17ba63f9c604ff832ceee8cf3fd6ae5b73b,PodSandboxId:f338b68106088a4d31b95b2b6e7efbfcf091c4f2d188028ab574196541d860a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758320094453178110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abd31d8c6c5b3c87a93850c2bb137398,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c3b967f14d009ae807b8d8ba6a8a2ae66c50c92e471081fcaa206c35f811f96,PodSandboxId:4b0506489bc7b3150421016eba2519ab3bfd22808311f57a729a5cbf2e2ecd14,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758320094476426263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 417c665c6f9bad38a10d4e7e3cc39fe5,},Annotations:map[string]string{io.kubernetes.container.hash
: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2638ce69f419d16cbf76cfb7aff1bc5b0e589818c17a637356fad4c803b8f036,PodSandboxId:212cf44bf1330f78a42e4edc79c6351039cb3eaae37900cce5377407e29134d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758320094436640768,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266998,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d964d8b9ef5b42cf8ecfff6d859fd523,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce14efb751fc201ba06b34edfe39c446e22a33b3072fd8c74834811cdc40113,PodSandboxId:7c8c81ccddb555eb1dc68286757963747ee621b03517eebfdd141813649f41da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758320094387491187,Labels:map[string]string{io.kubernetes.container.name: kube-apiserve
r,io.kubernetes.pod.name: kube-apiserver-addons-266998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86464c22dde37be494fc609505030838,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc8ee7e6-9213-4cb2-9cab-53325028b7c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.297537834Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.298751208Z" level=debug msg="Using SQLite blob info cache at /var/lib/containers/cache/blob-info-cache-v1.sqlite" file="blobinfocache/default.go:74"
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.298947190Z" level=debug msg="Source is a manifest list; copying (only) instance sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 for current system" file="copy/copy.go:318"
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.299019041Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.312088579Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8a58f13-abd1-49c3-b969-667b35c7f301 name=/runtime.v1.RuntimeService/Version
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.312161455Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8a58f13-abd1-49c3-b969-667b35c7f301 name=/runtime.v1.RuntimeService/Version
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.313333068Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8873bb9c-4c68-4939-b9d9-b76c18619766 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.314585740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758320381314556960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8873bb9c-4c68-4939-b9d9-b76c18619766 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.315513893Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbd88404-7aa5-45f1-a9fe-6007f1482640 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.316037877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbd88404-7aa5-45f1-a9fe-6007f1482640 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.316468475Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c29e5c8b1346c496bdb9941fbca08dfd1c55d92fba1a8a112e39ff035406f957,PodSandboxId:52ff26cc4f1b58693a710cbf91f46cc5b9cee5ca719343b4714d24033a62db91,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1758320237017108194,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 71e323d6-d37d-4ac6-88ea-2a015c817d71,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba749dfb8baec98e6b4b6535ec99f089cf83f691f989f08c7b1e6c5a2ee4cfb,PodSandboxId:a8c1ab0735000e63cac9b25bc83ebb1a7ea601b4ce79b830246e565d072aa8fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758320198291959295,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad155804-0aa9-4ac7-b063-6258e2f3e249,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96a7ca2027442e472eb5bb0563efe24c25360f927a8a52deecdc826bf935a05,PodSandboxId:f528e122d97ef8ecf74ca5206f6192e6336be94df30bf010297bbe7de57dde7b,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758320188486228221,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-nnd8q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed57d5a3-00a9-4f71-a36c-c9439daedaaf,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:58cd57ebb29a77aab16f4f547a550fd4310f42bfc23cf7d2fee79fd2bd7c8988,PodSandboxId:56bcb4c3289089ecb0be64a6d6567baf006f407ea6c8fa33668ba6a7afdfc7d1,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1758320168750389398,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5p8nh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5023eb94-efc8-4f11-9d8b-7662258db69d,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0099e4b3191e3811d53b1f06ff04f4632de4f6a40b1297e943847a1d740ba9ce,PodSandboxId:382b456303fb342c30b90f8553e42a7a3d756b440d1a074c22c4d9da78e9b515,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758320167890345873,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h8jc7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 58dc8ae5-11b7-4f11-817c-2313994529ec,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc563cc237ad054c79f4ecd5bdf5a4289217b0fa6349f2175e228d3d148eb72,PodSandboxId:29bec77f11c04fc0919d841eea055b13c1364f792f154dffb2962a1d903d93fa,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:966
0a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758320164142199535,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-wfgd6,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 427fd9ea-6757-43dc-bfb1-34e3b4dc7417,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d9d9e6f7cfff4c3e5c54c4a6c4f05a7761e4868b756605ede455aa265f02bd,PodSandboxId:4ec7e33290fa9e7553eb0333079818e20a21b61080eccd16be30c637ad7fa621,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758320159505194173,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6782948-68b4-4d0d-86a8-29ff87b98100,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504c4a05c990c550fbe4407197ea7b7e8abd1d1c5e578c648db202c14f54dc71,PodSandboxId:dbf3eb2b087be5aad68c079f4f096b35e07f4f2361ccedf119ab3d1c65b18a61,Metadata:&ContainerMetadata{Name:storage
-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758320114559330492,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5bc225a-996d-426c-9d33-0dc4b9a28f18,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efdccfb7c5871dd764703e68656f43252215f96ad04985fa1106848a93858264,PodSandboxId:b303add445c01e26711da81690ffa4c536937a0a8cd7336e6bb176c15dd5a37c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plug
in,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758320114763953414,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7vf6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c547168-e5a0-4407-9166-07bdef5312cc,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09e757f8fa5ecd8d41c49111638a56c414bf4bb8de16990f5343a6f1d1f6023b,PodSandboxId:9ff3349d533e394011e8bcd3065e3b059fb65ebe06273382da8625d6186a5b02,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758320107071484949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6nz2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 790a9a18-e498-42de-90c9-5868ade01dab,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c624489d26219ce761f4d2dd6908c6fb9ee5ac73f8f569a9407fd12737ae160,PodSandboxId:e92d1607c281647223226793641da14b877ffe1148e11ae993ed1a4b6d758cc8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758320105474676303,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjc8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2516043-8672-4a06-8be4-5a2bc2517230,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf91fd6c2612dea5041a6513c568a17ba63f9c604ff832ceee8cf3fd6ae5b73b,PodSandboxId:f338b68106088a4d31b95b2b6e7efbfcf091c4f2d188028ab574196541d860a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758320094453178110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abd31d8c6c5b3c87a93850c2bb137398,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c3b967f14d009ae807b8d8ba6a8a2ae66c50c92e471081fcaa206c35f811f96,PodSandboxId:4b0506489bc7b3150421016eba2519ab3bfd22808311f57a729a5cbf2e2ecd14,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758320094476426263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 417c665c6f9bad38a10d4e7e3cc39fe5,},Annotations:map[string]string{io.kubernetes.container.hash
: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2638ce69f419d16cbf76cfb7aff1bc5b0e589818c17a637356fad4c803b8f036,PodSandboxId:212cf44bf1330f78a42e4edc79c6351039cb3eaae37900cce5377407e29134d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758320094436640768,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266998,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d964d8b9ef5b42cf8ecfff6d859fd523,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce14efb751fc201ba06b34edfe39c446e22a33b3072fd8c74834811cdc40113,PodSandboxId:7c8c81ccddb555eb1dc68286757963747ee621b03517eebfdd141813649f41da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758320094387491187,Labels:map[string]string{io.kubernetes.container.name: kube-apiserve
r,io.kubernetes.pod.name: kube-apiserver-addons-266998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86464c22dde37be494fc609505030838,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbd88404-7aa5-45f1-a9fe-6007f1482640 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.329841828Z" level=debug msg="Too many requests to https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86: sleeping for 2.000000 seconds before next attempt" file="docker/docker_client.go:596"
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.361691412Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e5522dd-0f0c-402b-800b-6250beee1263 name=/runtime.v1.RuntimeService/Version
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.362005923Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e5522dd-0f0c-402b-800b-6250beee1263 name=/runtime.v1.RuntimeService/Version
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.363360410Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d0c69e3d-3bf5-42f5-b15c-54653d4a2e43 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.364899862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758320381364828211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596879,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d0c69e3d-3bf5-42f5-b15c-54653d4a2e43 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.365494613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d34926fb-4f33-471a-a228-c3b2ca906001 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.365548666Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d34926fb-4f33-471a-a228-c3b2ca906001 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:19:41 addons-266998 crio[821]: time="2025-09-19 22:19:41.366387694Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c29e5c8b1346c496bdb9941fbca08dfd1c55d92fba1a8a112e39ff035406f957,PodSandboxId:52ff26cc4f1b58693a710cbf91f46cc5b9cee5ca719343b4714d24033a62db91,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9,State:CONTAINER_RUNNING,CreatedAt:1758320237017108194,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 71e323d6-d37d-4ac6-88ea-2a015c817d71,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dba749dfb8baec98e6b4b6535ec99f089cf83f691f989f08c7b1e6c5a2ee4cfb,PodSandboxId:a8c1ab0735000e63cac9b25bc83ebb1a7ea601b4ce79b830246e565d072aa8fa,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758320198291959295,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ad155804-0aa9-4ac7-b063-6258e2f3e249,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a96a7ca2027442e472eb5bb0563efe24c25360f927a8a52deecdc826bf935a05,PodSandboxId:f528e122d97ef8ecf74ca5206f6192e6336be94df30bf010297bbe7de57dde7b,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1bec18b3728e7489d64104958b9da774a7d1c7f0f8b2bae7330480b4891f6f56,State:CONTAINER_RUNNING,CreatedAt:1758320188486228221,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-9cc49f96f-nnd8q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ed57d5a3-00a9-4f71-a36c-c9439daedaaf,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d75193f7,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:58cd57ebb29a77aab16f4f547a550fd4310f42bfc23cf7d2fee79fd2bd7c8988,PodSandboxId:56bcb4c3289089ecb0be64a6d6567baf006f407ea6c8fa33668ba6a7afdfc7d1,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65,Sta
te:CONTAINER_EXITED,CreatedAt:1758320168750389398,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-5p8nh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5023eb94-efc8-4f11-9d8b-7662258db69d,},Annotations:map[string]string{io.kubernetes.container.hash: b2514b62,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0099e4b3191e3811d53b1f06ff04f4632de4f6a40b1297e943847a1d740ba9ce,PodSandboxId:382b456303fb342c30b90f8553e42a7a3d756b440d1a074c22c4d9da78e9b515,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c217da6734db0feee6a8fa1d1697145
49c20bcb8c123ef218aec5d591e3fd65,State:CONTAINER_EXITED,CreatedAt:1758320167890345873,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h8jc7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 58dc8ae5-11b7-4f11-817c-2313994529ec,},Annotations:map[string]string{io.kubernetes.container.hash: a3467dfb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccc563cc237ad054c79f4ecd5bdf5a4289217b0fa6349f2175e228d3d148eb72,PodSandboxId:29bec77f11c04fc0919d841eea055b13c1364f792f154dffb2962a1d903d93fa,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:966
0a1727a97702fd80cef66da2e074d17d2e33bd086736d1ebdc7fc6ccd3441,State:CONTAINER_RUNNING,CreatedAt:1758320164142199535,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-wfgd6,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 427fd9ea-6757-43dc-bfb1-34e3b4dc7417,},Annotations:map[string]string{io.kubernetes.container.hash: 2616a42b,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98d9d9e6f7cfff4c3e5c54c4a6c4f05a7761e4868b756605ede455aa265f02bd,PodSandboxId:4ec7e33290fa9e7553eb0333079818e20a21b61080eccd16be30c637ad7fa621,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c88
0e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1758320159505194173,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6782948-68b4-4d0d-86a8-29ff87b98100,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504c4a05c990c550fbe4407197ea7b7e8abd1d1c5e578c648db202c14f54dc71,PodSandboxId:dbf3eb2b087be5aad68c079f4f096b35e07f4f2361ccedf119ab3d1c65b18a61,Metadata:&ContainerMetadata{Name:storage
-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758320114559330492,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5bc225a-996d-426c-9d33-0dc4b9a28f18,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efdccfb7c5871dd764703e68656f43252215f96ad04985fa1106848a93858264,PodSandboxId:b303add445c01e26711da81690ffa4c536937a0a8cd7336e6bb176c15dd5a37c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plug
in,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1758320114763953414,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7vf6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c547168-e5a0-4407-9166-07bdef5312cc,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09e757f8fa5ecd8d41c49111638a56c414bf4bb8de16990f5343a6f1d1f6023b,PodSandboxId:9ff3349d533e394011e8bcd3065e3b059fb65ebe06273382da8625d6186a5b02,Metadata:&Co
ntainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758320107071484949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6nz2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 790a9a18-e498-42de-90c9-5868ade01dab,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c624489d26219ce761f4d2dd6908c6fb9ee5ac73f8f569a9407fd12737ae160,PodSandboxId:e92d1607c281647223226793641da14b877ffe1148e11ae993ed1a4b6d758cc8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758320105474676303,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hjc8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2516043-8672-4a06-8be4-5a2bc2517230,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf91fd6c2612dea5041a6513c568a17ba63f9c604ff832ceee8cf3fd6ae5b73b,PodSandboxId:f338b68106088a4d31b95b2b6e7efbfcf091c4f2d188028ab574196541d860a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758320094453178110,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-266998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abd31d8c6c5b3c87a93850c2bb137398,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]
,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c3b967f14d009ae807b8d8ba6a8a2ae66c50c92e471081fcaa206c35f811f96,PodSandboxId:4b0506489bc7b3150421016eba2519ab3bfd22808311f57a729a5cbf2e2ecd14,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758320094476426263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-266998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 417c665c6f9bad38a10d4e7e3cc39fe5,},Annotations:map[string]string{io.kubernetes.container.hash
: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2638ce69f419d16cbf76cfb7aff1bc5b0e589818c17a637356fad4c803b8f036,PodSandboxId:212cf44bf1330f78a42e4edc79c6351039cb3eaae37900cce5377407e29134d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758320094436640768,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-266998,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: d964d8b9ef5b42cf8ecfff6d859fd523,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ce14efb751fc201ba06b34edfe39c446e22a33b3072fd8c74834811cdc40113,PodSandboxId:7c8c81ccddb555eb1dc68286757963747ee621b03517eebfdd141813649f41da,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758320094387491187,Labels:map[string]string{io.kubernetes.container.name: kube-apiserve
r,io.kubernetes.pod.name: kube-apiserver-addons-266998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86464c22dde37be494fc609505030838,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d34926fb-4f33-471a-a228-c3b2ca906001 name=/runtime.v1.RuntimeService/ListContainers
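The Version, ImageFsInfo, and ListContainers entries above are the standard CRI gRPC calls that kubelet and the log collector keep issuing against cri-o over its unix socket. A minimal sketch of the same polling sequence in Go, using the k8s.io/cri-api stubs; the socket path is the usual cri-o default and is an assumption here, and this is an illustration rather than minikube's actual collector code:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// cri-o serves the CRI on a unix socket; this path is the common default.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// /runtime.v1.RuntimeService/Version
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ver.RuntimeName, ver.RuntimeVersion) // e.g. cri-o 1.29.1

	// /runtime.v1.ImageService/ImageFsInfo
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Printf("image fs %s: %d bytes used\n", f.FsId.Mountpoint, f.UsedBytes.Value)
	}

	// /runtime.v1.RuntimeService/ListContainers: an empty filter returns
	// every container, hence the "No filters were applied" debug line.
	cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range cs.Containers {
		fmt.Printf("%s  %-25s %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

The "Too many requests" line near the top of this excerpt is a different subsystem: the containers/image pull code backing off after Docker Hub returned HTTP 429 for the kicbase/echo-server manifest. A toy version of that retry loop, standard library only; the fixed 2-second delay mirrors the log, while the real client also honors Retry-After, and the tag-addressed URL below is a stand-in for the digest-addressed manifest:

package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// fetchWithRetry GETs url and sleeps between attempts for as long as
// the registry keeps answering 429 Too Many Requests.
func fetchWithRetry(url string, attempts int, delay time.Duration) (*http.Response, error) {
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode != http.StatusTooManyRequests {
			return resp, nil
		}
		resp.Body.Close()
		log.Printf("sleeping for %s before next attempt", delay)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("still rate-limited after %d attempts", attempts)
}

func main() {
	// A real manifest pull would also send an Accept header and a bearer token.
	resp, err := fetchWithRetry(
		"https://registry-1.docker.io/v2/kicbase/echo-server/manifests/latest",
		5, 2*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}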
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c29e5c8b1346c       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   52ff26cc4f1b5       nginx
	dba749dfb8bae       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   a8c1ab0735000       busybox
	a96a7ca202744       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   f528e122d97ef       ingress-nginx-controller-9cc49f96f-nnd8q
	58cd57ebb29a7       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             3 minutes ago       Exited              patch                     1                   56bcb4c328908       ingress-nginx-admission-patch-5p8nh
	0099e4b3191e3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   3 minutes ago       Exited              create                    0                   382b456303fb3       ingress-nginx-admission-create-h8jc7
	ccc563cc237ad       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            3 minutes ago       Running             gadget                    0                   29bec77f11c04       gadget-wfgd6
	98d9d9e6f7cff       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   4ec7e33290fa9       kube-ingress-dns-minikube
	efdccfb7c5871       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   b303add445c01       amd-gpu-device-plugin-7vf6w
	504c4a05c990c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   dbf3eb2b087be       storage-provisioner
	09e757f8fa5ec       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   9ff3349d533e3       coredns-66bc5c9577-6nz2f
	7c624489d2621       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             4 minutes ago       Running             kube-proxy                0                   e92d1607c2816       kube-proxy-hjc8c
	9c3b967f14d00       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             4 minutes ago       Running             kube-controller-manager   0                   4b0506489bc7b       kube-controller-manager-addons-266998
	cf91fd6c2612d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             4 minutes ago       Running             etcd                      0                   f338b68106088       etcd-addons-266998
	2638ce69f419d       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             4 minutes ago       Running             kube-scheduler            0                   212cf44bf1330       kube-scheduler-addons-266998
	5ce14efb751fc       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             4 minutes ago       Running             kube-apiserver            0                   7c8c81ccddb55       kube-apiserver-addons-266998
	
	
	==> coredns [09e757f8fa5ecd8d41c49111638a56c414bf4bb8de16990f5343a6f1d1f6023b] <==
	[INFO] 10.244.0.8:49044 - 26323 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000764518s
	[INFO] 10.244.0.8:49044 - 2889 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000084777s
	[INFO] 10.244.0.8:49044 - 42674 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000110969s
	[INFO] 10.244.0.8:49044 - 22506 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000114554s
	[INFO] 10.244.0.8:49044 - 59314 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000071845s
	[INFO] 10.244.0.8:49044 - 43231 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000299534s
	[INFO] 10.244.0.8:49044 - 40900 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000893488s
	[INFO] 10.244.0.8:56993 - 36949 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000158956s
	[INFO] 10.244.0.8:56993 - 37229 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000987277s
	[INFO] 10.244.0.8:50500 - 51428 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094583s
	[INFO] 10.244.0.8:50500 - 51165 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000145015s
	[INFO] 10.244.0.8:54345 - 8463 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000162569s
	[INFO] 10.244.0.8:54345 - 8195 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00020046s
	[INFO] 10.244.0.8:48854 - 29561 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000107325s
	[INFO] 10.244.0.8:48854 - 29383 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000398365s
	[INFO] 10.244.0.23:40214 - 19213 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00078551s
	[INFO] 10.244.0.23:49412 - 52390 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125997s
	[INFO] 10.244.0.23:37574 - 64371 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000092417s
	[INFO] 10.244.0.23:43929 - 38848 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000186206s
	[INFO] 10.244.0.23:39020 - 22221 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125859s
	[INFO] 10.244.0.23:47067 - 29812 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000297727s
	[INFO] 10.244.0.23:35704 - 7030 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.004320416s
	[INFO] 10.244.0.23:51097 - 56472 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004503312s
	[INFO] 10.244.0.26:49974 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000492803s
	[INFO] 10.244.0.26:34583 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000102665s
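The NXDOMAIN-then-NOERROR pattern above is the pod resolver walking its search list: with the cluster default of ndots:5, a name like registry.kube-system.svc.cluster.local has only four dots, so each search suffix is tried (and fails) before the literal name resolves. A small sketch of that expansion rule, simplified from full resolv.conf semantics (trailing-dot names and other options are ignored):

package main

import (
	"fmt"
	"strings"
)

// expand lists the FQDNs a stub resolver will try for name, given the
// search domains and the ndots threshold from resolv.conf.
func expand(name string, search []string, ndots int) []string {
	var tries []string
	if strings.Count(name, ".") < ndots {
		// Fewer dots than ndots: search suffixes first, literal name last.
		for _, s := range search {
			tries = append(tries, name+"."+s)
		}
		return append(tries, name)
	}
	// At or above ndots: literal name first, then the suffixes.
	tries = append(tries, name)
	for _, s := range search {
		tries = append(tries, name+"."+s)
	}
	return tries
}

func main() {
	// The search list a pod in kube-system receives by default.
	search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	for _, q := range expand("registry.kube-system.svc.cluster.local", search, 5) {
		fmt.Println(q) // the first three match the NXDOMAIN queries logged above
	}
}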
	
	
	==> describe nodes <==
	Name:               addons-266998
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-266998
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=addons-266998
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_15_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-266998
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:14:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-266998
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:19:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:17:34 +0000   Fri, 19 Sep 2025 22:14:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:17:34 +0000   Fri, 19 Sep 2025 22:14:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:17:34 +0000   Fri, 19 Sep 2025 22:14:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:17:34 +0000   Fri, 19 Sep 2025 22:15:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.205
	  Hostname:    addons-266998
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008596Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9d3364d6c8e40e4bfa6b562fb833eaa
	  System UUID:                b9d3364d-6c8e-40e4-bfa6-b562fb833eaa
	  Boot ID:                    7443d744-a0d6-467e-8462-45d44a981dd9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  default                     hello-world-app-5d498dc89-pwr77             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-wfgd6                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-nnd8q    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m27s
	  kube-system                 amd-gpu-device-plugin-7vf6w                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 coredns-66bc5c9577-6nz2f                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m36s
	  kube-system                 etcd-addons-266998                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m41s
	  kube-system                 kube-apiserver-addons-266998                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-controller-manager-addons-266998       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-proxy-hjc8c                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-scheduler-addons-266998                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m34s  kube-proxy       
	  Normal  Starting                 4m42s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m41s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m41s  kubelet          Node addons-266998 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s  kubelet          Node addons-266998 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s  kubelet          Node addons-266998 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m40s  kubelet          Node addons-266998 status is now: NodeReady
	  Normal  RegisteredNode           4m37s  node-controller  Node addons-266998 event: Registered Node addons-266998 in Controller
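The percentages in the Allocated resources table above are just the summed pod requests divided by the node's allocatable figures, truncated to whole percent: 850m of the 2 allocatable CPUs is 42%, and 260Mi of 4008596Ki memory is 6%. The same arithmetic, spelled out as a quick check:

package main

import "fmt"

func main() {
	// Allocatable, from the node description above.
	cpuMilli := int64(2 * 1000) // 2 CPUs in millicores
	memKi := int64(4008596)     // memory in KiB

	// Summed pod requests: 850m CPU and 260Mi memory.
	reqCPUMilli := int64(850)
	reqMemKi := int64(260 * 1024)

	// Integer division truncates, so 42.5% prints as 42%.
	fmt.Printf("cpu    %dm (%d%%)\n", reqCPUMilli, reqCPUMilli*100/cpuMilli)
	fmt.Printf("memory %dMi (%d%%)\n", reqMemKi/1024, reqMemKi*100/memKi)
}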
	
	
	==> dmesg <==
	[  +1.119323] kauditd_printk_skb: 294 callbacks suppressed
	[  +0.363458] kauditd_printk_skb: 273 callbacks suppressed
	[  +0.000240] kauditd_printk_skb: 392 callbacks suppressed
	[ +13.056652] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.198181] kauditd_printk_skb: 20 callbacks suppressed
	[  +4.651383] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.768883] kauditd_printk_skb: 38 callbacks suppressed
	[Sep19 22:16] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.945150] kauditd_printk_skb: 119 callbacks suppressed
	[  +4.724887] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.000078] kauditd_printk_skb: 129 callbacks suppressed
	[  +5.200136] kauditd_printk_skb: 41 callbacks suppressed
	[  +4.029012] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.122911] kauditd_printk_skb: 47 callbacks suppressed
	[  +6.954085] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.951149] kauditd_printk_skb: 22 callbacks suppressed
	[Sep19 22:17] kauditd_printk_skb: 38 callbacks suppressed
	[  +1.444622] kauditd_printk_skb: 150 callbacks suppressed
	[  +2.835564] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.000126] kauditd_printk_skb: 93 callbacks suppressed
	[  +0.000032] kauditd_printk_skb: 134 callbacks suppressed
	[  +0.000063] kauditd_printk_skb: 112 callbacks suppressed
	[  +6.844248] kauditd_printk_skb: 61 callbacks suppressed
	[  +1.233478] kauditd_printk_skb: 125 callbacks suppressed
	[Sep19 22:19] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [cf91fd6c2612dea5041a6513c568a17ba63f9c604ff832ceee8cf3fd6ae5b73b] <==
	{"level":"info","ts":"2025-09-19T22:16:02.166667Z","caller":"traceutil/trace.go:172","msg":"trace[560130643] transaction","detail":"{read_only:false; response_revision:989; number_of_response:1; }","duration":"111.713151ms","start":"2025-09-19T22:16:02.054935Z","end":"2025-09-19T22:16:02.166648Z","steps":["trace[560130643] 'process raft request'  (duration: 111.549314ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:16:03.359304Z","caller":"traceutil/trace.go:172","msg":"trace[2100699408] transaction","detail":"{read_only:false; response_revision:991; number_of_response:1; }","duration":"116.832321ms","start":"2025-09-19T22:16:03.242460Z","end":"2025-09-19T22:16:03.359292Z","steps":["trace[2100699408] 'process raft request'  (duration: 116.734349ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:16:09.516434Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"237.737707ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:16:09.516493Z","caller":"traceutil/trace.go:172","msg":"trace[1022369828] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1022; }","duration":"237.87796ms","start":"2025-09-19T22:16:09.278604Z","end":"2025-09-19T22:16:09.516482Z","steps":["trace[1022369828] 'range keys from in-memory index tree'  (duration: 237.687263ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:16:19.687297Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"406.858559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:16:19.687455Z","caller":"traceutil/trace.go:172","msg":"trace[2038033448] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1110; }","duration":"407.028046ms","start":"2025-09-19T22:16:19.280402Z","end":"2025-09-19T22:16:19.687430Z","steps":["trace[2038033448] 'range keys from in-memory index tree'  (duration: 406.789347ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:16:19.687506Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:16:19.280385Z","time spent":"407.108499ms","remote":"127.0.0.1:56222","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-09-19T22:16:26.828066Z","caller":"traceutil/trace.go:172","msg":"trace[644369439] linearizableReadLoop","detail":"{readStateIndex:1162; appliedIndex:1162; }","duration":"159.374385ms","start":"2025-09-19T22:16:26.668660Z","end":"2025-09-19T22:16:26.828035Z","steps":["trace[644369439] 'read index received'  (duration: 159.366053ms)","trace[644369439] 'applied index is now lower than readState.Index'  (duration: 7.577µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:16:26.828221Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"159.569172ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:16:26.828252Z","caller":"traceutil/trace.go:172","msg":"trace[229246200] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1130; }","duration":"159.61118ms","start":"2025-09-19T22:16:26.668631Z","end":"2025-09-19T22:16:26.828242Z","steps":["trace[229246200] 'agreement among raft nodes before linearized reading'  (duration: 159.542207ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:16:26.829309Z","caller":"traceutil/trace.go:172","msg":"trace[979441919] transaction","detail":"{read_only:false; response_revision:1131; number_of_response:1; }","duration":"242.628146ms","start":"2025-09-19T22:16:26.586670Z","end":"2025-09-19T22:16:26.829298Z","steps":["trace[979441919] 'process raft request'  (duration: 241.780194ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:16:30.646377Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"286.16825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:16:30.646656Z","caller":"traceutil/trace.go:172","msg":"trace[159160209] range","detail":"{range_begin:/registry/controllers; range_end:; response_count:0; response_revision:1147; }","duration":"286.472004ms","start":"2025-09-19T22:16:30.360169Z","end":"2025-09-19T22:16:30.646641Z","steps":["trace[159160209] 'range keys from in-memory index tree'  (duration: 286.063779ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:16:30.646758Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"169.051258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:16:30.646848Z","caller":"traceutil/trace.go:172","msg":"trace[227495442] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:0; response_revision:1147; }","duration":"169.123527ms","start":"2025-09-19T22:16:30.477689Z","end":"2025-09-19T22:16:30.646813Z","steps":["trace[227495442] 'range keys from in-memory index tree'  (duration: 169.000779ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:16:30.649632Z","caller":"traceutil/trace.go:172","msg":"trace[1586605308] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"109.889497ms","start":"2025-09-19T22:16:30.539201Z","end":"2025-09-19T22:16:30.649091Z","steps":["trace[1586605308] 'process raft request'  (duration: 105.907216ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:17:00.160039Z","caller":"traceutil/trace.go:172","msg":"trace[1766791941] transaction","detail":"{read_only:false; response_revision:1321; number_of_response:1; }","duration":"131.27045ms","start":"2025-09-19T22:17:00.028740Z","end":"2025-09-19T22:17:00.160010Z","steps":["trace[1766791941] 'process raft request'  (duration: 130.472272ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:17:09.235561Z","caller":"traceutil/trace.go:172","msg":"trace[1375266926] transaction","detail":"{read_only:false; response_revision:1419; number_of_response:1; }","duration":"109.485524ms","start":"2025-09-19T22:17:09.126001Z","end":"2025-09-19T22:17:09.235486Z","steps":["trace[1375266926] 'process raft request'  (duration: 107.753195ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:17:10.636776Z","caller":"traceutil/trace.go:172","msg":"trace[1542707474] linearizableReadLoop","detail":"{readStateIndex:1479; appliedIndex:1479; }","duration":"117.652185ms","start":"2025-09-19T22:17:10.519097Z","end":"2025-09-19T22:17:10.636750Z","steps":["trace[1542707474] 'read index received'  (duration: 117.640813ms)","trace[1542707474] 'applied index is now lower than readState.Index'  (duration: 8.19µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:17:10.637011Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.900831ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:17:10.637037Z","caller":"traceutil/trace.go:172","msg":"trace[2049381544] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1433; }","duration":"117.941394ms","start":"2025-09-19T22:17:10.519089Z","end":"2025-09-19T22:17:10.637031Z","steps":["trace[2049381544] 'agreement among raft nodes before linearized reading'  (duration: 117.757316ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:17:10.638623Z","caller":"traceutil/trace.go:172","msg":"trace[845538492] transaction","detail":"{read_only:false; response_revision:1434; number_of_response:1; }","duration":"348.474734ms","start":"2025-09-19T22:17:10.290136Z","end":"2025-09-19T22:17:10.638611Z","steps":["trace[845538492] 'process raft request'  (duration: 348.307174ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:17:10.638760Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-19T22:17:10.290114Z","time spent":"348.549649ms","remote":"127.0.0.1:41128","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1383 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2025-09-19T22:17:14.055955Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"221.658721ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:17:14.056026Z","caller":"traceutil/trace.go:172","msg":"trace[509625184] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1484; }","duration":"221.736929ms","start":"2025-09-19T22:17:13.834278Z","end":"2025-09-19T22:17:14.056015Z","steps":["trace[509625184] 'range keys from in-memory index tree'  (duration: 221.605679ms)"],"step_count":1}
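etcd prints an "apply request took too long" warning whenever serving a request exceeds the 100ms expected-duration, and the traces above show reads of /registry/pods taking 200-400ms on this 2-CPU VM. A sketch that issues the same shaped read and times it with the etcd clientv3 API; the endpoint is this node's IP and the TLS setup is elided, both assumptions (minikube's etcd only accepts the cluster's client certificates):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://192.168.39.205:2379"},
		DialTimeout: 5 * time.Second,
		// TLS: tlsConfig, // load the apiserver-etcd-client cert and key here
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Same shape as the logged slow request: key:"/registry/pods" limit:1.
	start := time.Now()
	resp, err := cli.Get(context.Background(), "/registry/pods", clientv3.WithLimit(1))
	if err != nil {
		log.Fatal(err)
	}
	took := time.Since(start)
	fmt.Printf("%d kvs in %s\n", len(resp.Kvs), took)
	if took > 100*time.Millisecond {
		fmt.Println("over the 100ms expected-duration that etcd warns about")
	}
}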
	
	
	==> kernel <==
	 22:19:41 up 5 min,  0 users,  load average: 1.14, 1.82, 0.93
	Linux addons-266998 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Sep  9 02:24:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [5ce14efb751fc201ba06b34edfe39c446e22a33b3072fd8c74834811cdc40113] <==
	I0919 22:16:18.207207       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:16:44.774667       1 conn.go:339] Error on socket receive: read tcp 192.168.39.205:8443->192.168.39.1:44838: use of closed network connection
	E0919 22:16:44.974312       1 conn.go:339] Error on socket receive: read tcp 192.168.39.205:8443->192.168.39.1:44876: use of closed network connection
	I0919 22:16:54.272436       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.20.16"}
	I0919 22:17:13.199761       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0919 22:17:13.395261       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.86.36"}
	I0919 22:17:19.897157       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0919 22:17:25.674099       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:17:38.549580       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 22:17:38.549654       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 22:17:38.576501       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 22:17:38.576557       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 22:17:38.618294       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 22:17:38.618417       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 22:17:38.735693       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 22:17:38.735745       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0919 22:17:39.577020       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0919 22:17:39.735783       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0919 22:17:39.848045       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0919 22:17:42.763012       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0919 22:17:44.086181       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:17:49.596323       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0919 22:18:30.802299       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:18:55.922404       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:19:40.027968       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.60.145"}
	
	
	==> kube-controller-manager [9c3b967f14d009ae807b8d8ba6a8a2ae66c50c92e471081fcaa206c35f811f96] <==
	E0919 22:17:58.102404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:17:58.625113       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:17:58.626219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:18:00.868217       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:18:00.869324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0919 22:18:04.428583       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0919 22:18:04.428642       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:18:04.473793       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0919 22:18:04.473944       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0919 22:18:13.446955       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:18:13.447994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:18:17.913211       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:18:17.914290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:18:23.028088       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:18:23.029385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:18:45.785742       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:18:45.786971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:18:53.011659       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:18:53.012702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:19:05.090043       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:19:05.090987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:19:16.512629       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:19:16.513784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:19:31.353231       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:19:31.354472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [7c624489d26219ce761f4d2dd6908c6fb9ee5ac73f8f569a9407fd12737ae160] <==
	I0919 22:15:06.447621       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:15:06.479919       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:15:06.480178       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.205"]
	E0919 22:15:06.484371       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:15:06.641623       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0919 22:15:06.641809       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 22:15:06.641841       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:15:06.702823       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:15:06.703680       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:15:06.707032       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:15:06.725289       1 config.go:200] "Starting service config controller"
	I0919 22:15:06.726947       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:15:06.727005       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:15:06.727010       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:15:06.727021       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:15:06.727025       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:15:06.727750       1 config.go:309] "Starting node config controller"
	I0919 22:15:06.727756       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:15:06.727761       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:15:06.849209       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:15:06.859009       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:15:06.859042       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2638ce69f419d16cbf76cfb7aff1bc5b0e589818c17a637356fad4c803b8f036] <==
	E0919 22:14:57.388959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 22:14:57.389106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:14:57.389559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 22:14:57.390055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:14:57.390292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:14:57.390950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:14:57.391207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:14:57.391372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:14:57.391451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:14:57.391656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:14:57.391792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:14:57.391952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:14:57.391967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 22:14:58.275316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:14:58.287173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:14:58.315063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:14:58.349391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:14:58.404064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 22:14:58.465012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:14:58.480048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:14:58.483701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:14:58.659568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:14:58.684697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0919 22:14:58.722638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0919 22:15:00.378555       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 22:18:02 addons-266998 kubelet[1514]: I0919 22:18:02.534657    1514 scope.go:117] "RemoveContainer" containerID="69888ebb479157dba02910f153ae1e175eac2329b4c912a115236ee66dfae885"
	Sep 19 22:18:05 addons-266998 kubelet[1514]: I0919 22:18:04.999705    1514 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 19 22:18:10 addons-266998 kubelet[1514]: E0919 22:18:10.386525    1514 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320290386145189  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:18:10 addons-266998 kubelet[1514]: E0919 22:18:10.386550    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320290386145189  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:18:20 addons-266998 kubelet[1514]: E0919 22:18:20.390313    1514 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320300389691790  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:18:20 addons-266998 kubelet[1514]: E0919 22:18:20.390365    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320300389691790  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:18:30 addons-266998 kubelet[1514]: E0919 22:18:30.393800    1514 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320310393078937  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:18:30 addons-266998 kubelet[1514]: E0919 22:18:30.393849    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320310393078937  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:18:40 addons-266998 kubelet[1514]: E0919 22:18:40.397441    1514 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320320396748478  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:18:40 addons-266998 kubelet[1514]: E0919 22:18:40.397478    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320320396748478  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:18:50 addons-266998 kubelet[1514]: E0919 22:18:50.401123    1514 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320330400546141  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:18:50 addons-266998 kubelet[1514]: E0919 22:18:50.401152    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320330400546141  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:19:00 addons-266998 kubelet[1514]: E0919 22:19:00.404548    1514 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320340404072015  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:19:00 addons-266998 kubelet[1514]: E0919 22:19:00.404593    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320340404072015  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:19:07 addons-266998 kubelet[1514]: I0919 22:19:07.000209    1514 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-7vf6w" secret="" err="secret \"gcp-auth\" not found"
	Sep 19 22:19:10 addons-266998 kubelet[1514]: E0919 22:19:10.407796    1514 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320350407377308  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:19:10 addons-266998 kubelet[1514]: E0919 22:19:10.407842    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320350407377308  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:19:20 addons-266998 kubelet[1514]: E0919 22:19:20.411197    1514 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320360410690866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:19:20 addons-266998 kubelet[1514]: E0919 22:19:20.411243    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320360410690866  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:19:29 addons-266998 kubelet[1514]: I0919 22:19:29.000168    1514 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 19 22:19:30 addons-266998 kubelet[1514]: E0919 22:19:30.414462    1514 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320370413986014  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:19:30 addons-266998 kubelet[1514]: E0919 22:19:30.414487    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320370413986014  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:19:40 addons-266998 kubelet[1514]: I0919 22:19:40.069780    1514 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdddp\" (UniqueName: \"kubernetes.io/projected/353ebd0b-fb03-4bf3-822e-4be845b6af58-kube-api-access-tdddp\") pod \"hello-world-app-5d498dc89-pwr77\" (UID: \"353ebd0b-fb03-4bf3-822e-4be845b6af58\") " pod="default/hello-world-app-5d498dc89-pwr77"
	Sep 19 22:19:40 addons-266998 kubelet[1514]: E0919 22:19:40.424198    1514 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320380423708388  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	Sep 19 22:19:40 addons-266998 kubelet[1514]: E0919 22:19:40.424436    1514 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320380423708388  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:596879}  inodes_used:{value:201}}"
	
	
	==> storage-provisioner [504c4a05c990c550fbe4407197ea7b7e8abd1d1c5e578c648db202c14f54dc71] <==
	W0919 22:19:16.406043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:18.409724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:18.415417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:20.418676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:20.428619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:22.432395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:22.438678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:24.441387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:24.448531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:26.452436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:26.457765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:28.461215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:28.468685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:30.472637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:30.477770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:32.481418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:32.489436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:34.493090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:34.503644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:36.507943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:36.515536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:38.518506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:38.523690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:40.530273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:40.539832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
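Note on the log dump above: the storage-provisioner section is dominated by deprecation warnings because its watch still uses the core/v1 Endpoints API, and the kubelet's eviction manager repeatedly fails because it cannot parse the CRI image-filesystem stats cri-o returns. Neither stream is the test failure itself. Purely as an illustration of the replacement API the deprecation warning names (this command is not part of the test harness), the new resource can be queried directly:

	kubectl --context addons-266998 get endpointslices.discovery.k8s.io -A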
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-266998 -n addons-266998
helpers_test.go:269: (dbg) Run:  kubectl --context addons-266998 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-pwr77 ingress-nginx-admission-create-h8jc7 ingress-nginx-admission-patch-5p8nh
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-266998 describe pod hello-world-app-5d498dc89-pwr77 ingress-nginx-admission-create-h8jc7 ingress-nginx-admission-patch-5p8nh
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-266998 describe pod hello-world-app-5d498dc89-pwr77 ingress-nginx-admission-create-h8jc7 ingress-nginx-admission-patch-5p8nh: exit status 1 (78.443708ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-pwr77
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-266998/192.168.39.205
	Start Time:       Fri, 19 Sep 2025 22:19:39 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tdddp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tdddp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-pwr77 to addons-266998
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-h8jc7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5p8nh" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-266998 describe pod hello-world-app-5d498dc89-pwr77 ingress-nginx-admission-create-h8jc7 ingress-nginx-admission-patch-5p8nh: exit status 1
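The two NotFound errors are expected: the admission create/patch jobs are short-lived, and their pods had already been garbage-collected by the time the post-mortem ran. As an illustration only (not part of the harness), a lookup that exits zero even when some of the named pods are gone would use get with --ignore-not-found:

	kubectl --context addons-266998 get pod ingress-nginx-admission-create-h8jc7 ingress-nginx-admission-patch-5p8nh --ignore-not-found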
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266998 addons disable ingress-dns --alsologtostderr -v=1: (1.254020059s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266998 addons disable ingress --alsologtostderr -v=1: (7.856577442s)
--- FAIL: TestAddons/parallel/Ingress (158.78s)
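Aside on the kube-proxy warning captured in the post-mortem logs above: nodePortAddresses was left unset, so NodePort connections are accepted on all local IPs. A hedged sketch of the remedy the warning itself suggests, assuming the config-file equivalent of the --nodeport-addresses primary flag in a kubeadm-managed cluster like this one (not the configuration this run actually used):

	kubectl --context addons-266998 -n kube-system edit configmap kube-proxy
	# in config.conf, set:  nodePortAddresses: ["primary"]
	# then delete the kube-proxy pods so the DaemonSet restarts them with the new config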

TestFunctional/parallel/DashboardCmd (302.4s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-351278 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-351278 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-351278 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-351278 --alsologtostderr -v=1] stderr:
I0919 22:32:50.573978   29018 out.go:360] Setting OutFile to fd 1 ...
I0919 22:32:50.574111   29018 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:32:50.574119   29018 out.go:374] Setting ErrFile to fd 2...
I0919 22:32:50.574123   29018 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:32:50.574328   29018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
I0919 22:32:50.574565   29018 mustload.go:65] Loading cluster: functional-351278
I0919 22:32:50.574963   29018 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:32:50.575306   29018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 22:32:50.575363   29018 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 22:32:50.588896   29018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36845
I0919 22:32:50.589530   29018 main.go:141] libmachine: () Calling .GetVersion
I0919 22:32:50.590159   29018 main.go:141] libmachine: Using API Version  1
I0919 22:32:50.590179   29018 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 22:32:50.590532   29018 main.go:141] libmachine: () Calling .GetMachineName
I0919 22:32:50.590753   29018 main.go:141] libmachine: (functional-351278) Calling .GetState
I0919 22:32:50.592683   29018 host.go:66] Checking if "functional-351278" exists ...
I0919 22:32:50.592982   29018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 22:32:50.593019   29018 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 22:32:50.606525   29018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44073
I0919 22:32:50.606968   29018 main.go:141] libmachine: () Calling .GetVersion
I0919 22:32:50.607414   29018 main.go:141] libmachine: Using API Version  1
I0919 22:32:50.607428   29018 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 22:32:50.607834   29018 main.go:141] libmachine: () Calling .GetMachineName
I0919 22:32:50.608047   29018 main.go:141] libmachine: (functional-351278) Calling .DriverName
I0919 22:32:50.608192   29018 api_server.go:166] Checking apiserver status ...
I0919 22:32:50.608249   29018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0919 22:32:50.608274   29018 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
I0919 22:32:50.611551   29018 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
I0919 22:32:50.612049   29018 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
I0919 22:32:50.612082   29018 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
I0919 22:32:50.612270   29018 main.go:141] libmachine: (functional-351278) Calling .GetSSHPort
I0919 22:32:50.612457   29018 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
I0919 22:32:50.612618   29018 main.go:141] libmachine: (functional-351278) Calling .GetSSHUsername
I0919 22:32:50.612748   29018 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/functional-351278/id_rsa Username:docker}
I0919 22:32:50.708083   29018 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6314/cgroup
W0919 22:32:50.724778   29018 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6314/cgroup: Process exited with status 1
stdout:

stderr:
I0919 22:32:50.724857   29018 ssh_runner.go:195] Run: ls
I0919 22:32:50.731288   29018 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
I0919 22:32:50.736183   29018 api_server.go:279] https://192.168.39.95:8441/healthz returned 200:
ok
W0919 22:32:50.736229   29018 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0919 22:32:50.736415   29018 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:32:50.736440   29018 addons.go:69] Setting dashboard=true in profile "functional-351278"
I0919 22:32:50.736449   29018 addons.go:238] Setting addon dashboard=true in "functional-351278"
I0919 22:32:50.736486   29018 host.go:66] Checking if "functional-351278" exists ...
I0919 22:32:50.736883   29018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 22:32:50.736934   29018 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 22:32:50.750480   29018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34623
I0919 22:32:50.750922   29018 main.go:141] libmachine: () Calling .GetVersion
I0919 22:32:50.751376   29018 main.go:141] libmachine: Using API Version  1
I0919 22:32:50.751392   29018 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 22:32:50.751763   29018 main.go:141] libmachine: () Calling .GetMachineName
I0919 22:32:50.752235   29018 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 22:32:50.752300   29018 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 22:32:50.766655   29018 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43523
I0919 22:32:50.767197   29018 main.go:141] libmachine: () Calling .GetVersion
I0919 22:32:50.767683   29018 main.go:141] libmachine: Using API Version  1
I0919 22:32:50.767702   29018 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 22:32:50.768049   29018 main.go:141] libmachine: () Calling .GetMachineName
I0919 22:32:50.768216   29018 main.go:141] libmachine: (functional-351278) Calling .GetState
I0919 22:32:50.770123   29018 main.go:141] libmachine: (functional-351278) Calling .DriverName
I0919 22:32:50.772063   29018 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0919 22:32:50.773258   29018 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0919 22:32:50.774341   29018 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0919 22:32:50.774354   29018 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0919 22:32:50.774373   29018 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
I0919 22:32:50.777850   29018 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
I0919 22:32:50.778322   29018 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
I0919 22:32:50.778362   29018 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
I0919 22:32:50.778583   29018 main.go:141] libmachine: (functional-351278) Calling .GetSSHPort
I0919 22:32:50.778819   29018 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
I0919 22:32:50.778973   29018 main.go:141] libmachine: (functional-351278) Calling .GetSSHUsername
I0919 22:32:50.779129   29018 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/functional-351278/id_rsa Username:docker}
I0919 22:32:50.876565   29018 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0919 22:32:50.876595   29018 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0919 22:32:50.900074   29018 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0919 22:32:50.900100   29018 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0919 22:32:50.924443   29018 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0919 22:32:50.924464   29018 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0919 22:32:50.947997   29018 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0919 22:32:50.948017   29018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0919 22:32:50.973635   29018 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0919 22:32:50.973655   29018 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0919 22:32:50.996382   29018 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0919 22:32:50.996406   29018 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0919 22:32:51.019999   29018 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0919 22:32:51.020022   29018 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0919 22:32:51.044665   29018 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0919 22:32:51.044689   29018 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0919 22:32:51.068122   29018 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0919 22:32:51.068144   29018 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0919 22:32:51.091454   29018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0919 22:32:51.818875   29018 main.go:141] libmachine: Making call to close driver server
I0919 22:32:51.818902   29018 main.go:141] libmachine: (functional-351278) Calling .Close
I0919 22:32:51.819237   29018 main.go:141] libmachine: Successfully made call to close driver server
I0919 22:32:51.819264   29018 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 22:32:51.819279   29018 main.go:141] libmachine: Making call to close driver server
I0919 22:32:51.819287   29018 main.go:141] libmachine: (functional-351278) Calling .Close
I0919 22:32:51.819509   29018 main.go:141] libmachine: Successfully made call to close driver server
I0919 22:32:51.819530   29018 main.go:141] libmachine: (functional-351278) DBG | Closing plugin on server side
I0919 22:32:51.819534   29018 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 22:32:51.821022   29018 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-351278 addons enable metrics-server

I0919 22:32:51.822474   29018 addons.go:201] Writing out "functional-351278" config to set dashboard=true...
W0919 22:32:51.822850   29018 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0919 22:32:51.823820   29018 kapi.go:59] client config for functional-351278: &rest.Config{Host:"https://192.168.39.95:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0919 22:32:51.824449   29018 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0919 22:32:51.824471   29018 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0919 22:32:51.824478   29018 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0919 22:32:51.824484   29018 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0919 22:32:51.824497   29018 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0919 22:32:51.836892   29018 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  baf4cbb5-6720-45f5-b275-e37ac32f4807 1249 0 2025-09-19 22:32:51 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-19 22:32:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.103.83.102,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.103.83.102],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0919 22:32:51.837048   29018 out.go:285] * Launching proxy ...
* Launching proxy ...
I0919 22:32:51.837125   29018 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-351278 proxy --port 36195]
I0919 22:32:51.837425   29018 dashboard.go:157] Waiting for kubectl to output host:port ...
I0919 22:32:51.881851   29018 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0919 22:32:51.881927   29018 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0919 22:32:51.892188   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1f4b553b-faee-4661-a387-c9cea4b21e4c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:51 GMT]] Body:0xc00080b140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00016b2c0 TLS:<nil>}
I0919 22:32:51.892251   29018 retry.go:31] will retry after 77.134µs: Temporary Error: unexpected response code: 503
I0919 22:32:51.895991   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ff376538-b81b-4036-bf07-852d7a3d0a53] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:51 GMT]] Body:0xc001630100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000386a00 TLS:<nil>}
I0919 22:32:51.896047   29018 retry.go:31] will retry after 188.256µs: Temporary Error: unexpected response code: 503
I0919 22:32:51.900108   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aa3cef31-a0ac-47db-ac92-97edd0e82138] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:51 GMT]] Body:0xc000622d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00016b400 TLS:<nil>}
I0919 22:32:51.900170   29018 retry.go:31] will retry after 311.333µs: Temporary Error: unexpected response code: 503
I0919 22:32:51.904245   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[927358c6-2889-4e76-88a6-ea01c5e124e0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:51 GMT]] Body:0xc000622e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00156c000 TLS:<nil>}
I0919 22:32:51.904304   29018 retry.go:31] will retry after 328.352µs: Temporary Error: unexpected response code: 503
I0919 22:32:51.908629   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f8c99146-9383-4cfe-8475-84418e27ec8f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:51 GMT]] Body:0xc00080b240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00156c140 TLS:<nil>}
I0919 22:32:51.908691   29018 retry.go:31] will retry after 605.251µs: Temporary Error: unexpected response code: 503
I0919 22:32:51.913584   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fdce1a50-f034-4fb4-8d6d-468b3839647e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:51 GMT]] Body:0xc000622fc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000386b40 TLS:<nil>}
I0919 22:32:51.913680   29018 retry.go:31] will retry after 694.793µs: Temporary Error: unexpected response code: 503
I0919 22:32:51.917646   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c237c074-7d50-493c-9da5-538dd93134b4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:51 GMT]] Body:0xc000623080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00156c280 TLS:<nil>}
I0919 22:32:51.917694   29018 retry.go:31] will retry after 1.22913ms: Temporary Error: unexpected response code: 503
I0919 22:32:51.922993   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d6800155-310e-4959-810c-8770b81bc94c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:51 GMT]] Body:0xc00080b380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00156c3c0 TLS:<nil>}
I0919 22:32:51.923044   29018 retry.go:31] will retry after 1.573165ms: Temporary Error: unexpected response code: 503
I0919 22:32:51.927980   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bb5696bb-1d19-490d-af2e-9543482a22f9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:51 GMT]] Body:0xc001630240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000386c80 TLS:<nil>}
I0919 22:32:51.928032   29018 retry.go:31] will retry after 1.676603ms: Temporary Error: unexpected response code: 503
I0919 22:32:51.933500   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5916a459-612d-42ed-a265-af2967580b3b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:51 GMT]] Body:0xc00080b480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00016b540 TLS:<nil>}
I0919 22:32:51.933560   29018 retry.go:31] will retry after 5.133731ms: Temporary Error: unexpected response code: 503
I0919 22:32:51.943209   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[70ea883d-4087-4918-8877-469852de1871] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:51 GMT]] Body:0xc0006231c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000386dc0 TLS:<nil>}
I0919 22:32:51.943259   29018 retry.go:31] will retry after 6.082858ms: Temporary Error: unexpected response code: 503
I0919 22:32:51.953948   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[929e8b44-a8d7-4c39-9089-6aa7f2ddbaed] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:51 GMT]] Body:0xc00080b540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00156c500 TLS:<nil>}
I0919 22:32:51.953995   29018 retry.go:31] will retry after 11.723967ms: Temporary Error: unexpected response code: 503
I0919 22:32:51.973237   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[93e5455e-4554-4603-b107-e0db37a6b76e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:51 GMT]] Body:0xc000623400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000386f00 TLS:<nil>}
I0919 22:32:51.973291   29018 retry.go:31] will retry after 12.427664ms: Temporary Error: unexpected response code: 503
I0919 22:32:51.991277   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ec488ee9-c307-43f1-956d-0107e1fc7269] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:51 GMT]] Body:0xc00080b640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00156c640 TLS:<nil>}
I0919 22:32:51.991336   29018 retry.go:31] will retry after 23.608473ms: Temporary Error: unexpected response code: 503
I0919 22:32:52.021047   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2bbefdce-55d7-48f9-99e5-3f689c92af58] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:52 GMT]] Body:0xc000623600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000387040 TLS:<nil>}
I0919 22:32:52.021112   29018 retry.go:31] will retry after 22.079976ms: Temporary Error: unexpected response code: 503
I0919 22:32:52.050363   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c2947d1c-b75c-4b51-b686-211d597083bb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:52 GMT]] Body:0xc001630380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00156c780 TLS:<nil>}
I0919 22:32:52.050413   29018 retry.go:31] will retry after 44.552054ms: Temporary Error: unexpected response code: 503
I0919 22:32:52.102257   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b0d628f5-641d-45bf-82ec-f6933d14aa68] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:52 GMT]] Body:0xc000623800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00016b680 TLS:<nil>}
I0919 22:32:52.102331   29018 retry.go:31] will retry after 73.475492ms: Temporary Error: unexpected response code: 503
I0919 22:32:52.182594   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fcffaefe-dea9-435f-8935-3315e4c152d3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:52 GMT]] Body:0xc00080b700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00156c8c0 TLS:<nil>}
I0919 22:32:52.182657   29018 retry.go:31] will retry after 136.03847ms: Temporary Error: unexpected response code: 503
I0919 22:32:52.322412   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4a4bb877-fe8b-44ef-9181-e18b431159c0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:52 GMT]] Body:0xc0006239c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000387180 TLS:<nil>}
I0919 22:32:52.322489   29018 retry.go:31] will retry after 137.938174ms: Temporary Error: unexpected response code: 503
I0919 22:32:52.464042   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c531d37-ba94-429a-a8f8-7240ccbcd228] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:52 GMT]] Body:0xc001630440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00156ca00 TLS:<nil>}
I0919 22:32:52.464095   29018 retry.go:31] will retry after 240.859339ms: Temporary Error: unexpected response code: 503
I0919 22:32:52.709108   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[da3d9fac-5f50-4a20-be2c-2b983ea26dc3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:52 GMT]] Body:0xc00080b800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00016b7c0 TLS:<nil>}
I0919 22:32:52.709176   29018 retry.go:31] will retry after 268.459784ms: Temporary Error: unexpected response code: 503
I0919 22:32:52.982338   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[680ae2a2-23b5-4a32-9df3-7f6f8fe859ec] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:52 GMT]] Body:0xc001630500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003872c0 TLS:<nil>}
I0919 22:32:52.982393   29018 retry.go:31] will retry after 740.329677ms: Temporary Error: unexpected response code: 503
I0919 22:32:53.726013   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aadeabc8-1f52-4fb7-a00e-39960af0db8b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:53 GMT]] Body:0xc000623b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00016b900 TLS:<nil>}
I0919 22:32:53.726094   29018 retry.go:31] will retry after 774.208246ms: Temporary Error: unexpected response code: 503
I0919 22:32:54.504804   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6dbebaeb-89e5-4345-9884-490f6247b9ca] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:54 GMT]] Body:0xc00080b900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00156cb40 TLS:<nil>}
I0919 22:32:54.504878   29018 retry.go:31] will retry after 1.471808804s: Temporary Error: unexpected response code: 503
I0919 22:32:55.980754   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[271687eb-3352-4bce-aa52-7461b8d7f1b3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:55 GMT]] Body:0xc000623d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000387400 TLS:<nil>}
I0919 22:32:55.980820   29018 retry.go:31] will retry after 2.207236514s: Temporary Error: unexpected response code: 503
I0919 22:32:58.193225   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7cab21ec-3b9b-4c44-900d-a9636d92c208] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:32:58 GMT]] Body:0xc001630600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00156cc80 TLS:<nil>}
I0919 22:32:58.193281   29018 retry.go:31] will retry after 2.735240897s: Temporary Error: unexpected response code: 503
I0919 22:33:00.932862   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[656fd8c0-0db6-428b-9f4d-c09d61317366] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:33:00 GMT]] Body:0xc0016b2040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000387540 TLS:<nil>}
I0919 22:33:00.932920   29018 retry.go:31] will retry after 3.109639482s: Temporary Error: unexpected response code: 503
I0919 22:33:04.046269   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d8ec1c51-5fbc-40e0-87dc-bfb259a494b8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:33:04 GMT]] Body:0xc00080ba80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00016be00 TLS:<nil>}
I0919 22:33:04.046333   29018 retry.go:31] will retry after 4.96891658s: Temporary Error: unexpected response code: 503
I0919 22:33:09.021934   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8d408b82-6510-4e74-a89e-70ad32c4788a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:33:09 GMT]] Body:0xc00080bb80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000387680 TLS:<nil>}
I0919 22:33:09.021987   29018 retry.go:31] will retry after 9.481591101s: Temporary Error: unexpected response code: 503
I0919 22:33:18.509789   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dd085f42-c30f-4379-bbbd-6d11cffa69bb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:33:18 GMT]] Body:0xc001630700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003877c0 TLS:<nil>}
I0919 22:33:18.509848   29018 retry.go:31] will retry after 6.966990926s: Temporary Error: unexpected response code: 503
I0919 22:33:25.483922   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[106cccf0-d519-436e-ad89-9f858362a51e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:33:25 GMT]] Body:0xc001630780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000387900 TLS:<nil>}
I0919 22:33:25.483977   29018 retry.go:31] will retry after 28.475422087s: Temporary Error: unexpected response code: 503
I0919 22:33:53.965485   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6f8912d6-7d0e-49ea-bbb7-07287c763abc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:33:53 GMT]] Body:0xc0016b20c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00165e000 TLS:<nil>}
I0919 22:33:53.965564   29018 retry.go:31] will retry after 19.272518795s: Temporary Error: unexpected response code: 503
I0919 22:34:13.242266   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5ff8c050-04cd-415b-b06e-de4bdb2324f2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:34:13 GMT]] Body:0xc00080bd40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00165e140 TLS:<nil>}
I0919 22:34:13.242322   29018 retry.go:31] will retry after 28.76726289s: Temporary Error: unexpected response code: 503
I0919 22:34:42.013708   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7c7ff165-6158-4d34-8bfe-6a6ab1f14c0e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:34:41 GMT]] Body:0xc00080bdc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00156cdc0 TLS:<nil>}
I0919 22:34:42.013831   29018 retry.go:31] will retry after 42.443679889s: Temporary Error: unexpected response code: 503
I0919 22:35:24.463513   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[66e1a513-bb19-4ba5-97a5-ae7aeddf0ab8] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:35:24 GMT]] Body:0xc0016b2040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000386280 TLS:<nil>}
I0919 22:35:24.463585   29018 retry.go:31] will retry after 47.809515698s: Temporary Error: unexpected response code: 503
I0919 22:36:12.277373   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4cbafd3e-7613-455a-aa6b-c174b17f8cfd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:36:12 GMT]] Body:0xc00080a400 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00156cf00 TLS:<nil>}
I0919 22:36:12.277470   29018 retry.go:31] will retry after 46.385689016s: Temporary Error: unexpected response code: 503
I0919 22:36:58.669536   29018 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[38278ae8-c224-4a91-a3dc-b87f32b676f8] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Fri, 19 Sep 2025 22:36:58 GMT]] Body:0xc00080a3c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00156d040 TLS:<nil>}
I0919 22:36:58.669612   29018 retry.go:31] will retry after 1m22.976101708s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-351278 -n functional-351278
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-351278 logs -n 25: (1.539919032s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                ARGS                                                                 │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-351278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2795973708/001:/mount1 --alsologtostderr -v=1                  │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ mount          │ -p functional-351278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2795973708/001:/mount3 --alsologtostderr -v=1                  │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ mount          │ -p functional-351278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2795973708/001:/mount2 --alsologtostderr -v=1                  │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh            │ functional-351278 ssh findmnt -T /mount1                                                                                            │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh            │ functional-351278 ssh findmnt -T /mount2                                                                                            │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh            │ functional-351278 ssh findmnt -T /mount3                                                                                            │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ mount          │ -p functional-351278 --kill=true                                                                                                    │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ start          │ -p functional-351278 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ start          │ -p functional-351278 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false           │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-351278 --alsologtostderr -v=1                                                                      │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ update-context │ functional-351278 update-context --alsologtostderr -v=2                                                                             │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ update-context │ functional-351278 update-context --alsologtostderr -v=2                                                                             │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ update-context │ functional-351278 update-context --alsologtostderr -v=2                                                                             │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ image          │ functional-351278 image ls --format short --alsologtostderr                                                                         │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ image          │ functional-351278 image ls --format yaml --alsologtostderr                                                                          │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ ssh            │ functional-351278 ssh pgrep buildkitd                                                                                               │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │                     │
	│ image          │ functional-351278 image build -t localhost/my-image:functional-351278 testdata/build --alsologtostderr                              │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ image          │ functional-351278 image ls --format json --alsologtostderr                                                                          │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ image          │ functional-351278 image ls --format table --alsologtostderr                                                                         │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ image          │ functional-351278 image ls                                                                                                          │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ service        │ functional-351278 service list                                                                                                      │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ service        │ functional-351278 service list -o json                                                                                              │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ service        │ functional-351278 service --namespace=default --https --url hello-node                                                              │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │                     │
	│ service        │ functional-351278 service hello-node --url --format={{.IP}}                                                                         │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │                     │
	│ service        │ functional-351278 service hello-node --url                                                                                          │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:32:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:32:50.448994   28990 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:32:50.449267   28990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:32:50.449277   28990 out.go:374] Setting ErrFile to fd 2...
	I0919 22:32:50.449281   28990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:32:50.449466   28990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	I0919 22:32:50.449918   28990 out.go:368] Setting JSON to false
	I0919 22:32:50.450889   28990 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4497,"bootTime":1758316673,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:32:50.450979   28990 start.go:140] virtualization: kvm guest
	I0919 22:32:50.452926   28990 out.go:179] * [functional-351278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:32:50.454152   28990 notify.go:220] Checking for updates...
	I0919 22:32:50.454178   28990 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:32:50.455249   28990 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:32:50.456277   28990 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 22:32:50.457463   28990 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 22:32:50.458570   28990 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:32:50.459746   28990 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:32:50.461354   28990 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:32:50.461950   28990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:32:50.462017   28990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:32:50.475786   28990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43741
	I0919 22:32:50.476249   28990 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:32:50.476777   28990 main.go:141] libmachine: Using API Version  1
	I0919 22:32:50.476805   28990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:32:50.477159   28990 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:32:50.477415   28990 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:32:50.477658   28990 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:32:50.478003   28990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:32:50.478045   28990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:32:50.492182   28990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36729
	I0919 22:32:50.492616   28990 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:32:50.493058   28990 main.go:141] libmachine: Using API Version  1
	I0919 22:32:50.493081   28990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:32:50.493467   28990 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:32:50.493665   28990 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:32:50.526542   28990 out.go:179] * Using the kvm2 driver based on existing profile
	I0919 22:32:50.527765   28990 start.go:304] selected driver: kvm2
	I0919 22:32:50.527780   28990 start.go:918] validating driver "kvm2" against &{Name:functional-351278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-351278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:32:50.527887   28990 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:32:50.528840   28990 cni.go:84] Creating CNI manager for ""
	I0919 22:32:50.528908   28990 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 22:32:50.528959   28990 start.go:348] cluster config:
	{Name:functional-351278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-351278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:32:50.530330   28990 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 19 22:37:51 functional-351278 crio[4941]: time="2025-09-19 22:37:51.393127475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758321471393060512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201235,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59a8e2ae-19fc-4723-87fe-6833af1235bd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:37:51 functional-351278 crio[4941]: time="2025-09-19 22:37:51.394220135Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d12f40b0-7cbb-4e5e-9a3d-4af78610c59a name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:37:51 functional-351278 crio[4941]: time="2025-09-19 22:37:51.394335352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d12f40b0-7cbb-4e5e-9a3d-4af78610c59a name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:37:51 functional-351278 crio[4941]: time="2025-09-19 22:37:51.394655612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a787edfb8f9c734d3f5475054443381d7a2a79092acbf9a206f87afe5b0a0c5,PodSandboxId:06fdd7820d034f4ae92e5d2d96796018617b28fd316023a16e12d5852a4a5a3f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758321162451285058,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc475075-c430-4387-b9b0-f728391024f1,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f830720c30812bcf9aa33ac20c632924b1c7120a65fd62b5dcfd937a7ae0e617,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758320675216064724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cb2cb146fe5847510292057e63e40a96fd64e4f4a76d8696090e6a48ed4351,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758320675254838006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6483900c50ab2f70850376c9f661c10fc004d33319d8bf76f53dbc28e3259c8b,PodSandboxId:d7316350ed4bf51f55c971d0eed2f6c875ccfdd174aca0edfa500a9128027f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758320670920170425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb349a09d58d71c20ea5d2b4f09b994c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ceb7aad18b856b069a53fa36528acb133c0db8d9b22639ad02163b2a8d256,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ede7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758320670593619725,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300bb2063f5a43edc6035ff49f65d585ce99e18457f8ac56a6d29b175cf12e8e,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758320670610434339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654bd01629fdeca3657986cd26bfffe3f8a55afa36789f47b07bc6d11d84a780,PodSandboxId:eb5586f63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758320670574957041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d062b037dd6fb189144a98d2de6f1c0d776f2e6505f52278691cb46e10a23d74,PodSandboxId:16b9b7066729ddc70087ac98b62dbfca1d172e0e6e3fbdbcd4b64c1188c97b60,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758320666084991194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48123667739fa5c6e4b0d2d5215d09e671990614ccd0d3d4fc54cf1ff733e76f,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ede7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758320665174114998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c2d509c5728590a77ee037c7f0994bafee35e8bf16b51258dee0762a0a53b1,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758320665022788748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8f63dfd7cc4378b1a91b1e35eef735c40ccd151cff64e84d64aa5de2121aae,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758320664876226298,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a41d060c833a02a6404078af8ea2569c9f77f65721fd004704b5b3e80d50bd61,PodSandboxId:eb5586f63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758320664849656479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c598550cc09fff82686c0933621cfc45b23d13c3546265f21a53cd4c27111f,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758320664820547401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9732f6304c59d282334ca0d28d637dd10341a1c86fd79880df7b7519a377526e,PodSandboxId:6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758320628073012264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d12f40b0-7cbb-4e5e-9a3d-4af78610c59a name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:37:51 functional-351278 crio[4941]: time="2025-09-19 22:37:51.448070171Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eed0829d-1cbe-43f2-8fd5-4892ef37b1fb name=/runtime.v1.RuntimeService/Version
	Sep 19 22:37:51 functional-351278 crio[4941]: time="2025-09-19 22:37:51.448141592Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eed0829d-1cbe-43f2-8fd5-4892ef37b1fb name=/runtime.v1.RuntimeService/Version
	Sep 19 22:37:51 functional-351278 crio[4941]: time="2025-09-19 22:37:51.450322933Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b8e8f77-cb17-4ca3-89ef-9be6ff03e5b2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:37:51 functional-351278 crio[4941]: time="2025-09-19 22:37:51.451220474Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758321471451185266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:201235,},InodesUsed:&UInt64Value{Value:103,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b8e8f77-cb17-4ca3-89ef-9be6ff03e5b2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:37:51 functional-351278 crio[4941]: time="2025-09-19 22:37:51.451845614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=443231ba-8837-4e6e-9003-2b7c4f7bd10a name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:37:51 functional-351278 crio[4941]: time="2025-09-19 22:37:51.451977958Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=443231ba-8837-4e6e-9003-2b7c4f7bd10a name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:37:51 functional-351278 crio[4941]: time="2025-09-19 22:37:51.452253031Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a787edfb8f9c734d3f5475054443381d7a2a79092acbf9a206f87afe5b0a0c5,PodSandboxId:06fdd7820d034f4ae92e5d2d96796018617b28fd316023a16e12d5852a4a5a3f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758321162451285058,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc475075-c430-4387-b9b0-f728391024f1,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f830720c30812bcf9aa33ac20c632924b1c7120a65fd62b5dcfd937a7ae0e617,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758320675216064724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cb2cb146fe5847510292057e63e40a96fd64e4f4a76d8696090e6a48ed4351,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758320675254838006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6483900c50ab2f70850376c9f661c10fc004d33319d8bf76f53dbc28e3259c8b,PodSandboxId:d7316350ed4bf51f55c971d0eed2f6c875ccfdd174aca0edfa500a9128027f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758320670920170425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb349a09d58d71c20ea5d2b4f09b994c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protoc
ol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ceb7aad18b856b069a53fa36528acb133c0db8d9b22639ad02163b2a8d256,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ede7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758320670593619725,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300bb2063f5a43edc6035ff49f65d585ce99e18457f8ac56a6d29b175cf12e8e,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758320670610434339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654bd01629fdeca3657986cd26bfffe3f8a55afa36789f47b07bc6d11d84a780,PodSandboxId:eb5586f63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758320670574957041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d062b037dd6fb189144a98d2de6f1c0d776f2e6505f52278691cb46e10a23d74,PodSandboxId:16b9b7066729ddc70087ac98b62dbfca1d172e0e6e3fbdbcd4b64c1188c97b60,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Creat
edAt:1758320666084991194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48123667739fa5c6e4b0d2d5215d09e671990614ccd0d3d4fc54cf1ff733e76f,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ed
e7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758320665174114998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c2d509c5728590a77ee037c7f0994bafee3
5e8bf16b51258dee0762a0a53b1,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758320665022788748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8f63dfd7cc4378b1a91b1e35eef735c40ccd151cff64e84d
64aa5de2121aae,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758320664876226298,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a41d060c833a02a6404078af8ea2569c9f77f65721fd004704b5b3e80d50bd61,PodSandboxId:eb5586f
63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758320664849656479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:10c598550cc09fff82686c0933621cfc45b23d13c3546265f21a53cd4c27111f,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758320664820547401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9732f6304c59d282334ca0d28d637dd10341a1c86fd79880df7b7519a377526e,PodSandboxId:6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758320628073012264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=443231ba-8837-4e6e-9003-2b7c4f7bd10a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5a787edfb8f9c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   06fdd7820d034       busybox-mount
	72cb2cb146fe5       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      13 minutes ago      Running             kube-proxy                3                   26fc8596bfe94       kube-proxy-gnc2m
	f830720c30812       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       4                   179e1cbc0b9cb       storage-provisioner
	6483900c50ab2       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      13 minutes ago      Running             kube-apiserver            0                   d7316350ed4bf       kube-apiserver-functional-351278
	300bb2063f5a4       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      13 minutes ago      Running             kube-controller-manager   3                   50d153de402bb       kube-controller-manager-functional-351278
	ca9ceb7aad18b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      13 minutes ago      Running             etcd                      3                   dfc27b667d8fb       etcd-functional-351278
	654bd01629fde       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      13 minutes ago      Running             kube-scheduler            3                   eb5586f63d27c       kube-scheduler-functional-351278
	d062b037dd6fb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 minutes ago      Running             coredns                   2                   16b9b7066729d       coredns-66bc5c9577-kcks9
	48123667739fa       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      13 minutes ago      Exited              etcd                      2                   dfc27b667d8fb       etcd-functional-351278
	18c2d509c5728       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       3                   179e1cbc0b9cb       storage-provisioner
	eb8f63dfd7cc4       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      13 minutes ago      Exited              kube-proxy                2                   26fc8596bfe94       kube-proxy-gnc2m
	a41d060c833a0       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      13 minutes ago      Exited              kube-scheduler            2                   eb5586f63d27c       kube-scheduler-functional-351278
	10c598550cc09       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      13 minutes ago      Exited              kube-controller-manager   2                   50d153de402bb       kube-controller-manager-functional-351278
	9732f6304c59d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      14 minutes ago      Exited              coredns                   1                   6467ca6fcd998       coredns-66bc5c9577-kcks9
	
	
	==> coredns [9732f6304c59d282334ca0d28d637dd10341a1c86fd79880df7b7519a377526e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51843 - 42718 "HINFO IN 3262367968470923463.4546114080195651747. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.087807319s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d062b037dd6fb189144a98d2de6f1c0d776f2e6505f52278691cb46e10a23d74] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37152 - 37023 "HINFO IN 8047511030750463407.6215404466120594246. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.460804898s
	
	
	==> describe nodes <==
	Name:               functional-351278
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-351278
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=functional-351278
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_22_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:22:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-351278
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:37:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:37 +0000   Fri, 19 Sep 2025 22:22:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:37 +0000   Fri, 19 Sep 2025 22:22:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:37 +0000   Fri, 19 Sep 2025 22:22:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:37 +0000   Fri, 19 Sep 2025 22:22:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    functional-351278
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9fbec12c9cc4da5a4819f98a915eadd
	  System UUID:                e9fbec12-c9cc-4da5-a481-9f98a915eadd
	  Boot ID:                    0ec8fcd8-9c60-4abf-860c-295d8944fa7f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hxq2h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-node-connect-7d85dfc575-47ht6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     mysql-5bb876957f-2hghz                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    12m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-kcks9                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     14m
	  kube-system                 etcd-functional-351278                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-functional-351278              250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-functional-351278     200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-gnc2m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-functional-351278              100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-wzzm4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-htp94         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node functional-351278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node functional-351278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node functional-351278 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m                kubelet          Node functional-351278 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node functional-351278 event: Registered Node functional-351278 in Controller
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node functional-351278 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node functional-351278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node functional-351278 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node functional-351278 event: Registered Node functional-351278 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node functional-351278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node functional-351278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node functional-351278 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node functional-351278 event: Registered Node functional-351278 in Controller
	
	
	==> dmesg <==
	[  +0.000048] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000519] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.191929] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089286] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.095437] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.128311] kauditd_printk_skb: 171 callbacks suppressed
	[  +1.053733] kauditd_printk_skb: 18 callbacks suppressed
	[Sep19 22:23] kauditd_printk_skb: 220 callbacks suppressed
	[  +0.109627] kauditd_printk_skb: 11 callbacks suppressed
	[  +4.558363] kauditd_printk_skb: 243 callbacks suppressed
	[Sep19 22:24] kauditd_printk_skb: 137 callbacks suppressed
	[  +0.111375] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.310472] kauditd_printk_skb: 78 callbacks suppressed
	[  +0.126850] kauditd_printk_skb: 272 callbacks suppressed
	[  +5.561032] kauditd_printk_skb: 102 callbacks suppressed
	[  +4.743320] kauditd_printk_skb: 62 callbacks suppressed
	[  +0.048897] kauditd_printk_skb: 90 callbacks suppressed
	[Sep19 22:25] kauditd_printk_skb: 65 callbacks suppressed
	[ +24.004450] kauditd_printk_skb: 74 callbacks suppressed
	[Sep19 22:32] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.450214] kauditd_printk_skb: 25 callbacks suppressed
	[Sep19 22:35] crun[9389]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.982311] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [48123667739fa5c6e4b0d2d5215d09e671990614ccd0d3d4fc54cf1ff733e76f] <==
	{"level":"warn","ts":"2025-09-19T22:24:26.076069Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-09-19T22:24:26.076081Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.95:2380"]}
	{"level":"info","ts":"2025-09-19T22:24:26.076113Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-19T22:24:26.076644Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.95:2379"]}
	{"level":"info","ts":"2025-09-19T22:24:26.076833Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.4","git-sha":"5400cdc","go-version":"go1.23.11","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-351278","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.95:2380"],"listen-peer-urls":["https://192.168.39.95:2380"],"advertise-client-urls":["https://192.168.39.95:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.95:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cl
uster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	{"level":"info","ts":"2025-09-19T22:24:26.077611Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc000121390}"}
	{"level":"info","ts":"2025-09-19T22:24:26.095385Z","logger":"bbolt","caller":"bbolt@v1.4.2/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	{"level":"info","ts":"2025-09-19T22:24:26.095456Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"17.930871ms"}
	{"level":"info","ts":"2025-09-19T22:24:26.095504Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":567}
	{"level":"info","ts":"2025-09-19T22:24:26.119624Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-09-19T22:24:26.122969Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1179648,"backend-size":"1.2 MB","backend-size-in-use-bytes":1134592,"backend-size-in-use":"1.1 MB"}
	{"level":"info","ts":"2025-09-19T22:24:26.123965Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:26.160224Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"986e33f48d4d13ba","local-member-id":"a71e7bac075997","commit-index":567}
	{"level":"info","ts":"2025-09-19T22:24:26.162606Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	{"level":"info","ts":"2025-09-19T22:24:26.164048Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	{"level":"info","ts":"2025-09-19T22:24:26.175855Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:a71e7bac075997 RaftAttributes:{PeerURLs:[https://192.168.39.95:2380] IsLearner:false} Attributes:{Name:functional-351278 ClientURLs:[https://192.168.39.95:2379]}}"}
	{"level":"info","ts":"2025-09-19T22:24:26.178255Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-09-19T22:24:26.178279Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"986e33f48d4d13ba","local-member-id":"a71e7bac075997","recovered-remote-peer-id":"a71e7bac075997","recovered-remote-peer-urls":["https://192.168.39.95:2380"],"recovered-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-19T22:24:26.178430Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	{"level":"info","ts":"2025-09-19T22:24:26.178446Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	{"level":"info","ts":"2025-09-19T22:24:26.181063Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	{"level":"info","ts":"2025-09-19T22:24:26.182197Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"a71e7bac075997 switched to configuration voters=()"}
	{"level":"info","ts":"2025-09-19T22:24:26.182232Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"a71e7bac075997 became follower at term 3"}
	{"level":"info","ts":"2025-09-19T22:24:26.182240Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft a71e7bac075997 [peers: [], term: 3, commit: 567, applied: 0, lastindex: 567, lastterm: 3]"}
	{"level":"warn","ts":"2025-09-19T22:24:26.193088Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	
	
	==> etcd [ca9ceb7aad18b856b069a53fa36528acb133c0db8d9b22639ad02163b2a8d256] <==
	{"level":"warn","ts":"2025-09-19T22:24:32.996830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.005341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.017665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.035124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.045969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.059781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.082181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.102768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.122632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.133996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.148518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.159167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.173818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.188995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.203658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.215842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.227397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.262460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.272505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.305766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.321449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.361840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56662","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:34:32.200434Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":985}
	{"level":"info","ts":"2025-09-19T22:34:32.216240Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":985,"took":"15.353883ms","hash":2834440057,"current-db-size-bytes":3338240,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":3338240,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2025-09-19T22:34:32.216282Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2834440057,"revision":985,"compact-revision":-1}
	
	
	==> kernel <==
	 22:37:51 up 15 min,  0 users,  load average: 0.14, 0.30, 0.24
	Linux functional-351278 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Sep  9 02:24:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [6483900c50ab2f70850376c9f661c10fc004d33319d8bf76f53dbc28e3259c8b] <==
	I0919 22:25:05.132702       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.132.69"}
	I0919 22:25:42.529501       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:25:57.346320       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:04.152951       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:06.674054       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:11.739177       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:34.539000       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:14.651185       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:40.362928       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:20.976132       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:47.885983       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:22.666809       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:47.895976       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:48.070414       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:51.478440       1 controller.go:667] quota admission added evaluator for: namespaces
	I0919 22:32:51.759720       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.83.102"}
	I0919 22:32:51.798748       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.48.82"}
	I0919 22:33:10.825978       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:07.268548       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:16.923720       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:34.169714       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:35:29.342027       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:31.170333       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:36:41.468226       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:36:43.375788       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [10c598550cc09fff82686c0933621cfc45b23d13c3546265f21a53cd4c27111f] <==
	I0919 22:24:26.606554       1 serving.go:386] Generated self-signed cert in-memory
	I0919 22:24:27.254668       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0919 22:24:27.254704       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:27.257310       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0919 22:24:27.257444       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 22:24:27.257720       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0919 22:24:27.257745       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [300bb2063f5a43edc6035ff49f65d585ce99e18457f8ac56a6d29b175cf12e8e] <==
	I0919 22:24:37.531489       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:37.531599       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:37.531619       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 22:24:37.538613       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:37.539827       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 22:24:37.544369       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 22:24:37.544487       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:24:37.544655       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 22:24:37.544722       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-351278"
	I0919 22:24:37.544777       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 22:24:37.545184       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:37.547281       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:24:37.550109       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0919 22:24:37.551543       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 22:24:37.559825       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 22:24:37.566126       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	E0919 22:32:51.577007       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.589036       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.610193       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.616342       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.628208       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.629041       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.639852       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.641267       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.647802       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [72cb2cb146fe5847510292057e63e40a96fd64e4f4a76d8696090e6a48ed4351] <==
	I0919 22:24:35.554555       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:35.656289       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:35.656600       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.95"]
	E0919 22:24:35.656975       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:35.712360       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0919 22:24:35.712449       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 22:24:35.712475       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:35.734784       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:35.735228       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:35.735258       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:35.737742       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:35.737851       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:35.743832       1 config.go:200] "Starting service config controller"
	I0919 22:24:35.744069       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:35.744808       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:35.744830       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:35.745343       1 config.go:309] "Starting node config controller"
	I0919 22:24:35.745354       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:35.745359       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:35.838538       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:24:35.844699       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:35.845223       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [eb8f63dfd7cc4378b1a91b1e35eef735c40ccd151cff64e84d64aa5de2121aae] <==
	I0919 22:24:25.364860       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:25.532215       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0919 22:24:25.538045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-351278&limit=500&resourceVersion=0\": dial tcp 192.168.39.95:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [654bd01629fdeca3657986cd26bfffe3f8a55afa36789f47b07bc6d11d84a780] <==
	I0919 22:24:33.397444       1 serving.go:386] Generated self-signed cert in-memory
	I0919 22:24:34.211621       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:24:34.211706       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:34.220243       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0919 22:24:34.220330       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0919 22:24:34.220402       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:34.220431       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:34.220457       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 22:24:34.220473       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 22:24:34.220485       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:24:34.220560       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:24:34.321480       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 22:24:34.321537       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0919 22:24:34.321583       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [a41d060c833a02a6404078af8ea2569c9f77f65721fd004704b5b3e80d50bd61] <==
	I0919 22:24:26.731449       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:24:27.211844       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.95:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.95:8441: connect: connection refused
	W0919 22:24:27.212028       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:24:27.212120       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:24:27.235673       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:24:27.235715       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0919 22:24:27.235731       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0919 22:24:27.237834       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:24:27.238080       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:24:27.238148       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:27.238179       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:24:27.238622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.39.95:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.95:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:24:27.238839       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E0919 22:24:27.239533       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:27.239623       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0919 22:24:27.239664       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0919 22:24:27.239699       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:27.239717       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0919 22:24:27.239793       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0919 22:24:27.239824       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 19 22:37:10 functional-351278 kubelet[6067]: E0919 22:37:10.279184    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321430278754173  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201235}  inodes_used:{value:103}}"
	Sep 19 22:37:10 functional-351278 kubelet[6067]: E0919 22:37:10.279210    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321430278754173  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201235}  inodes_used:{value:103}}"
	Sep 19 22:37:11 functional-351278 kubelet[6067]: E0919 22:37:11.957152    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="016b4f51-c35b-4a5d-890c-75d3735f5b43"
	Sep 19 22:37:17 functional-351278 kubelet[6067]: E0919 22:37:17.989403    6067 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 19 22:37:17 functional-351278 kubelet[6067]: E0919 22:37:17.989462    6067 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 19 22:37:17 functional-351278 kubelet[6067]: E0919 22:37:17.989654    6067 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-2hghz_default(e83df643-ba88-420d-ade1-6c4a474cd6fd): ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 19 22:37:17 functional-351278 kubelet[6067]: E0919 22:37:17.989686    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2hghz" podUID="e83df643-ba88-420d-ade1-6c4a474cd6fd"
	Sep 19 22:37:20 functional-351278 kubelet[6067]: E0919 22:37:20.282153    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321440280985853  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201235}  inodes_used:{value:103}}"
	Sep 19 22:37:20 functional-351278 kubelet[6067]: E0919 22:37:20.282195    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321440280985853  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201235}  inodes_used:{value:103}}"
	Sep 19 22:37:25 functional-351278 kubelet[6067]: E0919 22:37:25.956703    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="016b4f51-c35b-4a5d-890c-75d3735f5b43"
	Sep 19 22:37:29 functional-351278 kubelet[6067]: E0919 22:37:29.958283    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2hghz" podUID="e83df643-ba88-420d-ade1-6c4a474cd6fd"
	Sep 19 22:37:30 functional-351278 kubelet[6067]: E0919 22:37:30.063466    6067 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod1d98d841-9889-4f5a-b46d-4c15a211c7e9/crio-6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5: Error finding container 6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5: Status 404 returned error can't find the container with id 6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5
	Sep 19 22:37:30 functional-351278 kubelet[6067]: E0919 22:37:30.284998    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321450284152491  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201235}  inodes_used:{value:103}}"
	Sep 19 22:37:30 functional-351278 kubelet[6067]: E0919 22:37:30.285043    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321450284152491  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201235}  inodes_used:{value:103}}"
	Sep 19 22:37:37 functional-351278 kubelet[6067]: E0919 22:37:37.953390    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="016b4f51-c35b-4a5d-890c-75d3735f5b43"
	Sep 19 22:37:40 functional-351278 kubelet[6067]: E0919 22:37:40.288066    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321460287487052  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201235}  inodes_used:{value:103}}"
	Sep 19 22:37:40 functional-351278 kubelet[6067]: E0919 22:37:40.288091    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321460287487052  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201235}  inodes_used:{value:103}}"
	Sep 19 22:37:43 functional-351278 kubelet[6067]: E0919 22:37:43.955638    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2hghz" podUID="e83df643-ba88-420d-ade1-6c4a474cd6fd"
	Sep 19 22:37:48 functional-351278 kubelet[6067]: E0919 22:37:48.667569    6067 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 19 22:37:48 functional-351278 kubelet[6067]: E0919 22:37:48.667630    6067 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 19 22:37:48 functional-351278 kubelet[6067]: E0919 22:37:48.667861    6067 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-47ht6_default(3af4e468-7894-42f7-8bf3-23cebdaddd0c): ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 19 22:37:48 functional-351278 kubelet[6067]: E0919 22:37:48.667951    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-47ht6" podUID="3af4e468-7894-42f7-8bf3-23cebdaddd0c"
	Sep 19 22:37:49 functional-351278 kubelet[6067]: E0919 22:37:49.954500    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="016b4f51-c35b-4a5d-890c-75d3735f5b43"
	Sep 19 22:37:50 functional-351278 kubelet[6067]: E0919 22:37:50.290221    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321470289769445  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201235}  inodes_used:{value:103}}"
	Sep 19 22:37:50 functional-351278 kubelet[6067]: E0919 22:37:50.290391    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321470289769445  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:201235}  inodes_used:{value:103}}"
	
	
	==> storage-provisioner [18c2d509c5728590a77ee037c7f0994bafee35e8bf16b51258dee0762a0a53b1] <==
	I0919 22:24:25.563398       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 22:24:25.566735       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [f830720c30812bcf9aa33ac20c632924b1c7120a65fd62b5dcfd937a7ae0e617] <==
	W0919 22:37:27.179431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:29.182987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:29.188279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:31.192376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:31.198200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:33.201283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:33.206544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:35.210948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:35.216571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:37.220756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:37.230099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:39.234353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:39.240621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:41.245052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:41.250645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:43.254099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:43.264399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:45.268395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:45.274264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:47.278659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:47.288161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:49.292450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:49.301229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:51.305674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:37:51.318849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
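Note on the repeated warnings.go:70 lines above: they fire in pairs roughly every two seconds, a cadence consistent with the storage-provisioner's Endpoints-based leader-election renewals; Kubernetes v1.33+ marks every such v1 Endpoints read/write as deprecated in favor of discovery.k8s.io/v1 EndpointSlice. A minimal shell sketch for inspecting both object kinds in this cluster (no particular lock object name is assumed):

	# List the deprecated v1 Endpoints objects the warnings refer to, then
	# the discovery.k8s.io/v1 EndpointSlices that supersede them.
	kubectl --context functional-351278 get endpoints -A
	kubectl --context functional-351278 get endpointslices.discovery.k8s.io -A
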
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-351278 -n functional-351278
helpers_test.go:269: (dbg) Run:  kubectl --context functional-351278 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-hxq2h hello-node-connect-7d85dfc575-47ht6 mysql-5bb876957f-2hghz sp-pod dashboard-metrics-scraper-77bf4d6c4c-wzzm4 kubernetes-dashboard-855c9754f9-htp94
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-351278 describe pod busybox-mount hello-node-75c85bcc94-hxq2h hello-node-connect-7d85dfc575-47ht6 mysql-5bb876957f-2hghz sp-pod dashboard-metrics-scraper-77bf4d6c4c-wzzm4 kubernetes-dashboard-855c9754f9-htp94
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-351278 describe pod busybox-mount hello-node-75c85bcc94-hxq2h hello-node-connect-7d85dfc575-47ht6 mysql-5bb876957f-2hghz sp-pod dashboard-metrics-scraper-77bf4d6c4c-wzzm4 kubernetes-dashboard-855c9754f9-htp94: exit status 1 (109.370212ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:31:10 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://5a787edfb8f9c734d3f5475054443381d7a2a79092acbf9a206f87afe5b0a0c5
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 19 Sep 2025 22:32:42 +0000
	      Finished:     Fri, 19 Sep 2025 22:32:42 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n59bf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-n59bf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m42s  default-scheduler  Successfully assigned default/busybox-mount to functional-351278
	  Normal  Pulling    6m42s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m10s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.33s (1m31.478s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m10s  kubelet            Created container: mount-munger
	  Normal  Started    5m10s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-hxq2h
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:25:05 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6jtdq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6jtdq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  12m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hxq2h to functional-351278
	  Warning  Failed     6m13s (x3 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m8s (x4 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m8s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    115s (x11 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     115s (x11 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    104s (x5 over 12m)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-47ht6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:25:01 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6kzd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s6kzd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  12m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-47ht6 to functional-351278
	  Warning  Failed     11m                  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    3m (x11 over 11m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     3m (x11 over 11m)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m48s (x5 over 12m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     4s (x5 over 11m)     kubelet            Error: ErrImagePull
	  Warning  Failed     4s (x4 over 8m48s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             mysql-5bb876957f-2hghz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:24:58 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjr59 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vjr59:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  12m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-2hghz to functional-351278
	  Warning  Failed     12m                    kubelet            Failed to pull image "docker.io/mysql:5.7": copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m14s (x2 over 9m19s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m11s (x5 over 12m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     35s (x5 over 12m)      kubelet            Error: ErrImagePull
	  Warning  Failed     35s (x2 over 4m39s)    kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    9s (x13 over 12m)      kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     9s (x13 over 12m)      kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:25:06 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zqhg9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-zqhg9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  12m                  default-scheduler  Successfully assigned default/sp-pod to functional-351278
	  Warning  Failed     7m47s (x2 over 10m)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m31s (x4 over 12m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     65s (x4 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     65s (x2 over 5m11s)  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    3s (x9 over 10m)     kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     3s (x9 over 10m)     kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-wzzm4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-htp94" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-351278 describe pod busybox-mount hello-node-75c85bcc94-hxq2h hello-node-connect-7d85dfc575-47ht6 mysql-5bb876957f-2hghz sp-pod dashboard-metrics-scraper-77bf4d6c4c-wzzm4 kubernetes-dashboard-855c9754f9-htp94: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.40s)
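Every pod failure in this test traces to the same root cause visible in the events above: docker.io returned toomanyrequests (the unauthenticated pull rate limit), so kicbase/echo-server, docker.io/mysql:5.7 and docker.io/nginx never became pullable. A hedged mitigation sketch, assuming the host's Docker daemon can pull the image (authenticated, or within its own limit):

	# Pull on the host, then side-load into the minikube node so the kubelet
	# resolves the image locally instead of contacting docker.io.
	docker pull docker.io/kicbase/echo-server:latest
	minikube -p functional-351278 image load docker.io/kicbase/echo-server:latest

	# Alternative: authenticate in-cluster pulls. <user>/<token> are
	# placeholders; pods must reference the secret via spec.imagePullSecrets.
	kubectl --context functional-351278 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>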

TestFunctional/parallel/ServiceCmdConnect (603.42s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-351278 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-351278 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-47ht6" [3af4e468-7894-42f7-8bf3-23cebdaddd0c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
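The 10m poll above is equivalent to a one-shot readiness wait; when it times out, the container's waiting reason identifies the failure class without reading the full describe output. A sketch using the pod label the test itself sets (app=hello-node-connect):

	# Block until Ready or timeout, mirroring the wait at functional_test.go:1645.
	kubectl --context functional-351278 wait pod -l app=hello-node-connect \
	  --for=condition=Ready --timeout=600s
	# On timeout, surface the waiting reason (ImagePullBackOff in this run).
	kubectl --context functional-351278 get pod -l app=hello-node-connect \
	  -o jsonpath='{.items[*].status.containerStatuses[*].state.waiting.reason}'
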
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-351278 -n functional-351278
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-19 22:35:01.429923546 +0000 UTC m=+1256.827897746
functional_test.go:1645: (dbg) Run:  kubectl --context functional-351278 describe po hello-node-connect-7d85dfc575-47ht6 -n default
functional_test.go:1645: (dbg) kubectl --context functional-351278 describe po hello-node-connect-7d85dfc575-47ht6 -n default:
Name:             hello-node-connect-7d85dfc575-47ht6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-351278/192.168.39.95
Start Time:       Fri, 19 Sep 2025 22:25:01 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6kzd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-s6kzd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-47ht6 to functional-351278
Warning  Failed     8m30s                kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m3s (x4 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     78s (x4 over 8m30s)  kubelet            Error: ErrImagePull
Warning  Failed     78s (x3 over 5m57s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    9s (x11 over 8m30s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     9s (x11 over 8m30s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-351278 logs hello-node-connect-7d85dfc575-47ht6 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-351278 logs hello-node-connect-7d85dfc575-47ht6 -n default: exit status 1 (71.138448ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-47ht6" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-351278 logs hello-node-connect-7d85dfc575-47ht6 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
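Because the container never started, kubectl logs has nothing to return (the BadRequest above); the pull failures exist only as events. A sketch that filters them directly for the failing pod (the field selectors used are standard event fields):

	kubectl --context functional-351278 get events -n default \
	  --field-selector involvedObject.name=hello-node-connect-7d85dfc575-47ht6,reason=Failed
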
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-351278 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-47ht6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-351278/192.168.39.95
Start Time:       Fri, 19 Sep 2025 22:25:01 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6kzd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-s6kzd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-47ht6 to functional-351278
Warning  Failed     8m30s                kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m3s (x4 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     78s (x4 over 8m30s)  kubelet            Error: ErrImagePull
Warning  Failed     78s (x3 over 5m57s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    9s (x11 over 8m30s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     9s (x11 over 8m30s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-351278 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-351278 logs -l app=hello-node-connect: exit status 1 (72.618379ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-47ht6" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-351278 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-351278 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.109.72
IPs:                      10.102.109.72
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31201/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
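The empty Endpoints: line above is the service-side symptom of the same failure: the selector matches the pod, but a pod that never becomes Ready is excluded from the endpoints, so NodePort 31201 has no backend. A sketch confirming this through the EndpointSlice API (kubernetes.io/service-name is the standard slice label):

	kubectl --context functional-351278 get endpointslices \
	  -l kubernetes.io/service-name=hello-node-connect -o wide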
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-351278 -n functional-351278
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-351278 logs -n 25: (1.695801512s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                ARGS                                                                 │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-351278 ssh -- ls -la /mount-9p                                                                                           │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ ssh            │ functional-351278 ssh cat /mount-9p/test-1758321069045638215                                                                        │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ ssh            │ functional-351278 ssh stat /mount-9p/created-by-test                                                                                │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh            │ functional-351278 ssh stat /mount-9p/created-by-pod                                                                                 │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh            │ functional-351278 ssh sudo umount -f /mount-9p                                                                                      │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh            │ functional-351278 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ mount          │ -p functional-351278 /tmp/TestFunctionalparallelMountCmdspecific-port3719278976/001:/mount-9p --alsologtostderr -v=1 --port 46464   │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh            │ functional-351278 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh            │ functional-351278 ssh -- ls -la /mount-9p                                                                                           │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh            │ functional-351278 ssh sudo umount -f /mount-9p                                                                                      │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh            │ functional-351278 ssh findmnt -T /mount1                                                                                            │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ mount          │ -p functional-351278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2795973708/001:/mount1 --alsologtostderr -v=1                  │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ mount          │ -p functional-351278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2795973708/001:/mount3 --alsologtostderr -v=1                  │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ mount          │ -p functional-351278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2795973708/001:/mount2 --alsologtostderr -v=1                  │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh            │ functional-351278 ssh findmnt -T /mount1                                                                                            │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh            │ functional-351278 ssh findmnt -T /mount2                                                                                            │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh            │ functional-351278 ssh findmnt -T /mount3                                                                                            │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ mount          │ -p functional-351278 --kill=true                                                                                                    │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ start          │ -p functional-351278 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ start          │ -p functional-351278 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false           │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-351278 --alsologtostderr -v=1                                                                      │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ update-context │ functional-351278 update-context --alsologtostderr -v=2                                                                             │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ update-context │ functional-351278 update-context --alsologtostderr -v=2                                                                             │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ update-context │ functional-351278 update-context --alsologtostderr -v=2                                                                             │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ image          │ functional-351278 image ls --format short --alsologtostderr                                                                         │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:32:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:32:50.448994   28990 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:32:50.449267   28990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:32:50.449277   28990 out.go:374] Setting ErrFile to fd 2...
	I0919 22:32:50.449281   28990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:32:50.449466   28990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	I0919 22:32:50.449918   28990 out.go:368] Setting JSON to false
	I0919 22:32:50.450889   28990 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4497,"bootTime":1758316673,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:32:50.450979   28990 start.go:140] virtualization: kvm guest
	I0919 22:32:50.452926   28990 out.go:179] * [functional-351278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:32:50.454152   28990 notify.go:220] Checking for updates...
	I0919 22:32:50.454178   28990 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:32:50.455249   28990 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:32:50.456277   28990 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 22:32:50.457463   28990 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 22:32:50.458570   28990 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:32:50.459746   28990 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:32:50.461354   28990 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:32:50.461950   28990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:32:50.462017   28990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:32:50.475786   28990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43741
	I0919 22:32:50.476249   28990 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:32:50.476777   28990 main.go:141] libmachine: Using API Version  1
	I0919 22:32:50.476805   28990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:32:50.477159   28990 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:32:50.477415   28990 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:32:50.477658   28990 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:32:50.478003   28990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:32:50.478045   28990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:32:50.492182   28990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36729
	I0919 22:32:50.492616   28990 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:32:50.493058   28990 main.go:141] libmachine: Using API Version  1
	I0919 22:32:50.493081   28990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:32:50.493467   28990 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:32:50.493665   28990 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:32:50.526542   28990 out.go:179] * Using the kvm2 driver based on existing profile
	I0919 22:32:50.527765   28990 start.go:304] selected driver: kvm2
	I0919 22:32:50.527780   28990 start.go:918] validating driver "kvm2" against &{Name:functional-351278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-351278 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:32:50.527887   28990 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:32:50.528840   28990 cni.go:84] Creating CNI manager for ""
	I0919 22:32:50.528908   28990 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 22:32:50.528959   28990 start.go:348] cluster config:
	{Name:functional-351278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-351278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:32:50.530330   28990 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.627667661Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758321302627631333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01b0a88a-eaed-4a0c-8cac-0060a0d151ad name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.628547347Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=533c569f-e74d-49aa-a79c-0614611c05a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.628628066Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=533c569f-e74d-49aa-a79c-0614611c05a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.628973428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a787edfb8f9c734d3f5475054443381d7a2a79092acbf9a206f87afe5b0a0c5,PodSandboxId:06fdd7820d034f4ae92e5d2d96796018617b28fd316023a16e12d5852a4a5a3f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758321162451285058,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc475075-c430-4387-b9b0-f728391024f1,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f830720c30812bcf9aa33ac20c632924b1c7120a65fd62b5dcfd937a7ae0e617,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758320675216064724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cb2cb146fe5847510292057e63e40a96fd64e4f4a76d8696090e6a48ed4351,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758320675254838006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6483900c50ab2f70850376c9f661c10fc004d33319d8bf76f53dbc28e3259c8b,PodSandboxId:d7316350ed4bf51f55c971d0eed2f6c875ccfdd174aca0edfa500a9128027f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758320670920170425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb349a09d58d71c20ea5d2b4f09b994c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protoc
ol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ceb7aad18b856b069a53fa36528acb133c0db8d9b22639ad02163b2a8d256,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ede7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758320670593619725,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300bb2063f5a43edc6035ff49f65d585ce99e18457f8ac56a6d29b175cf12e8e,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758320670610434339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654bd01629fdeca3657986cd26bfffe3f8a55afa36789f47b07bc6d11d84a780,PodSandboxId:eb5586f63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758320670574957041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d062b037dd6fb189144a98d2de6f1c0d776f2e6505f52278691cb46e10a23d74,PodSandboxId:16b9b7066729ddc70087ac98b62dbfca1d172e0e6e3fbdbcd4b64c1188c97b60,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Creat
edAt:1758320666084991194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48123667739fa5c6e4b0d2d5215d09e671990614ccd0d3d4fc54cf1ff733e76f,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ed
e7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758320665174114998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c2d509c5728590a77ee037c7f0994bafee3
5e8bf16b51258dee0762a0a53b1,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758320665022788748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8f63dfd7cc4378b1a91b1e35eef735c40ccd151cff64e84d
64aa5de2121aae,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758320664876226298,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a41d060c833a02a6404078af8ea2569c9f77f65721fd004704b5b3e80d50bd61,PodSandboxId:eb5586f
63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758320664849656479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:10c598550cc09fff82686c0933621cfc45b23d13c3546265f21a53cd4c27111f,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758320664820547401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9732f6304c59d282334ca0d28d637dd10341a1c86fd79880df7b7519a377526e,PodSandboxId:6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758320628073012264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=533c569f-e74d-49aa-a79c-0614611c05a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.683096821Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a536bb0-4af9-4e87-ac47-d2d0f3169427 name=/runtime.v1.RuntimeService/Version
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.683210357Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a536bb0-4af9-4e87-ac47-d2d0f3169427 name=/runtime.v1.RuntimeService/Version
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.684983997Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=818447d0-017c-4e95-9cc7-527a170a6a1b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.685745895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758321302685719446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=818447d0-017c-4e95-9cc7-527a170a6a1b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.686483047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2edb1207-3b5d-4c80-8c79-853f89ede1ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.686555635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2edb1207-3b5d-4c80-8c79-853f89ede1ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.687067415Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a787edfb8f9c734d3f5475054443381d7a2a79092acbf9a206f87afe5b0a0c5,PodSandboxId:06fdd7820d034f4ae92e5d2d96796018617b28fd316023a16e12d5852a4a5a3f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758321162451285058,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc475075-c430-4387-b9b0-f728391024f1,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f830720c30812bcf9aa33ac20c632924b1c7120a65fd62b5dcfd937a7ae0e617,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758320675216064724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cb2cb146fe5847510292057e63e40a96fd64e4f4a76d8696090e6a48ed4351,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758320675254838006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6483900c50ab2f70850376c9f661c10fc004d33319d8bf76f53dbc28e3259c8b,PodSandboxId:d7316350ed4bf51f55c971d0eed2f6c875ccfdd174aca0edfa500a9128027f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758320670920170425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb349a09d58d71c20ea5d2b4f09b994c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protoc
ol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ceb7aad18b856b069a53fa36528acb133c0db8d9b22639ad02163b2a8d256,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ede7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758320670593619725,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300bb2063f5a43edc6035ff49f65d585ce99e18457f8ac56a6d29b175cf12e8e,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758320670610434339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654bd01629fdeca3657986cd26bfffe3f8a55afa36789f47b07bc6d11d84a780,PodSandboxId:eb5586f63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758320670574957041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d062b037dd6fb189144a98d2de6f1c0d776f2e6505f52278691cb46e10a23d74,PodSandboxId:16b9b7066729ddc70087ac98b62dbfca1d172e0e6e3fbdbcd4b64c1188c97b60,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Creat
edAt:1758320666084991194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48123667739fa5c6e4b0d2d5215d09e671990614ccd0d3d4fc54cf1ff733e76f,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ed
e7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758320665174114998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c2d509c5728590a77ee037c7f0994bafee3
5e8bf16b51258dee0762a0a53b1,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758320665022788748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8f63dfd7cc4378b1a91b1e35eef735c40ccd151cff64e84d
64aa5de2121aae,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758320664876226298,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a41d060c833a02a6404078af8ea2569c9f77f65721fd004704b5b3e80d50bd61,PodSandboxId:eb5586f
63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758320664849656479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:10c598550cc09fff82686c0933621cfc45b23d13c3546265f21a53cd4c27111f,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758320664820547401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9732f6304c59d282334ca0d28d637dd10341a1c86fd79880df7b7519a377526e,PodSandboxId:6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758320628073012264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2edb1207-3b5d-4c80-8c79-853f89ede1ed name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.739330528Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9457e2c-d061-4acd-87fa-143f98acb0f1 name=/runtime.v1.RuntimeService/Version
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.739607191Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9457e2c-d061-4acd-87fa-143f98acb0f1 name=/runtime.v1.RuntimeService/Version
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.741502448Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f557c05a-184f-4c03-8c65-29b0e77bc3eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.742335756Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758321302742308914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f557c05a-184f-4c03-8c65-29b0e77bc3eb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.743288945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebf82a5c-3740-4a34-a2db-28fdec4afeff name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.743365367Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebf82a5c-3740-4a34-a2db-28fdec4afeff name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.743641985Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a787edfb8f9c734d3f5475054443381d7a2a79092acbf9a206f87afe5b0a0c5,PodSandboxId:06fdd7820d034f4ae92e5d2d96796018617b28fd316023a16e12d5852a4a5a3f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758321162451285058,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc475075-c430-4387-b9b0-f728391024f1,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f830720c30812bcf9aa33ac20c632924b1c7120a65fd62b5dcfd937a7ae0e617,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758320675216064724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cb2cb146fe5847510292057e63e40a96fd64e4f4a76d8696090e6a48ed4351,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758320675254838006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6483900c50ab2f70850376c9f661c10fc004d33319d8bf76f53dbc28e3259c8b,PodSandboxId:d7316350ed4bf51f55c971d0eed2f6c875ccfdd174aca0edfa500a9128027f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758320670920170425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb349a09d58d71c20ea5d2b4f09b994c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protoc
ol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ceb7aad18b856b069a53fa36528acb133c0db8d9b22639ad02163b2a8d256,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ede7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758320670593619725,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300bb2063f5a43edc6035ff49f65d585ce99e18457f8ac56a6d29b175cf12e8e,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758320670610434339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654bd01629fdeca3657986cd26bfffe3f8a55afa36789f47b07bc6d11d84a780,PodSandboxId:eb5586f63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758320670574957041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d062b037dd6fb189144a98d2de6f1c0d776f2e6505f52278691cb46e10a23d74,PodSandboxId:16b9b7066729ddc70087ac98b62dbfca1d172e0e6e3fbdbcd4b64c1188c97b60,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Creat
edAt:1758320666084991194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48123667739fa5c6e4b0d2d5215d09e671990614ccd0d3d4fc54cf1ff733e76f,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ed
e7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758320665174114998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c2d509c5728590a77ee037c7f0994bafee3
5e8bf16b51258dee0762a0a53b1,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758320665022788748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8f63dfd7cc4378b1a91b1e35eef735c40ccd151cff64e84d
64aa5de2121aae,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758320664876226298,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a41d060c833a02a6404078af8ea2569c9f77f65721fd004704b5b3e80d50bd61,PodSandboxId:eb5586f
63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758320664849656479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:10c598550cc09fff82686c0933621cfc45b23d13c3546265f21a53cd4c27111f,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758320664820547401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9732f6304c59d282334ca0d28d637dd10341a1c86fd79880df7b7519a377526e,PodSandboxId:6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758320628073012264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebf82a5c-3740-4a34-a2db-28fdec4afeff name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.794512292Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bbcf3eae-4ccb-44a5-b479-803f42b4daa4 name=/runtime.v1.RuntimeService/Version
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.794603249Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bbcf3eae-4ccb-44a5-b479-803f42b4daa4 name=/runtime.v1.RuntimeService/Version
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.797138761Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=543ceae6-268c-45cb-a558-50682be1b371 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.797813975Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758321302797784114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=543ceae6-268c-45cb-a558-50682be1b371 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.798645594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=683c7be8-096a-4af3-98a1-40c649d7cf09 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.798756428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=683c7be8-096a-4af3-98a1-40c649d7cf09 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:35:02 functional-351278 crio[4941]: time="2025-09-19 22:35:02.799285739Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a787edfb8f9c734d3f5475054443381d7a2a79092acbf9a206f87afe5b0a0c5,PodSandboxId:06fdd7820d034f4ae92e5d2d96796018617b28fd316023a16e12d5852a4a5a3f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758321162451285058,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc475075-c430-4387-b9b0-f728391024f1,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f830720c30812bcf9aa33ac20c632924b1c7120a65fd62b5dcfd937a7ae0e617,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758320675216064724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cb2cb146fe5847510292057e63e40a96fd64e4f4a76d8696090e6a48ed4351,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758320675254838006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6483900c50ab2f70850376c9f661c10fc004d33319d8bf76f53dbc28e3259c8b,PodSandboxId:d7316350ed4bf51f55c971d0eed2f6c875ccfdd174aca0edfa500a9128027f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758320670920170425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb349a09d58d71c20ea5d2b4f09b994c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protoc
ol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ceb7aad18b856b069a53fa36528acb133c0db8d9b22639ad02163b2a8d256,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ede7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758320670593619725,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300bb2063f5a43edc6035ff49f65d585ce99e18457f8ac56a6d29b175cf12e8e,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758320670610434339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654bd01629fdeca3657986cd26bfffe3f8a55afa36789f47b07bc6d11d84a780,PodSandboxId:eb5586f63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758320670574957041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d062b037dd6fb189144a98d2de6f1c0d776f2e6505f52278691cb46e10a23d74,PodSandboxId:16b9b7066729ddc70087ac98b62dbfca1d172e0e6e3fbdbcd4b64c1188c97b60,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Creat
edAt:1758320666084991194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48123667739fa5c6e4b0d2d5215d09e671990614ccd0d3d4fc54cf1ff733e76f,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ed
e7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758320665174114998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c2d509c5728590a77ee037c7f0994bafee35e8bf16b51258dee0762a0a53b1,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758320665022788748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8f63dfd7cc4378b1a91b1e35eef735c40ccd151cff64e84d64aa5de2121aae,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758320664876226298,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a41d060c833a02a6404078af8ea2569c9f77f65721fd004704b5b3e80d50bd61,PodSandboxId:eb5586f63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758320664849656479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10c598550cc09fff82686c0933621cfc45b23d13c3546265f21a53cd4c27111f,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758320664820547401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9732f6304c59d282334ca0d28d637dd10341a1c86fd79880df7b7519a377526e,PodSandboxId:6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758320628073012264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=683c7be8-096a-4af3-98a1-40c649d7cf09 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5a787edfb8f9c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago       Exited              mount-munger              0                   06fdd7820d034       busybox-mount
	72cb2cb146fe5       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      10 minutes ago      Running             kube-proxy                3                   26fc8596bfe94       kube-proxy-gnc2m
	f830720c30812       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       4                   179e1cbc0b9cb       storage-provisioner
	6483900c50ab2       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      10 minutes ago      Running             kube-apiserver            0                   d7316350ed4bf       kube-apiserver-functional-351278
	300bb2063f5a4       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      10 minutes ago      Running             kube-controller-manager   3                   50d153de402bb       kube-controller-manager-functional-351278
	ca9ceb7aad18b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      3                   dfc27b667d8fb       etcd-functional-351278
	654bd01629fde       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      10 minutes ago      Running             kube-scheduler            3                   eb5586f63d27c       kube-scheduler-functional-351278
	d062b037dd6fb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   2                   16b9b7066729d       coredns-66bc5c9577-kcks9
	48123667739fa       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Exited              etcd                      2                   dfc27b667d8fb       etcd-functional-351278
	18c2d509c5728       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       3                   179e1cbc0b9cb       storage-provisioner
	eb8f63dfd7cc4       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      10 minutes ago      Exited              kube-proxy                2                   26fc8596bfe94       kube-proxy-gnc2m
	a41d060c833a0       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      10 minutes ago      Exited              kube-scheduler            2                   eb5586f63d27c       kube-scheduler-functional-351278
	10c598550cc09       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      10 minutes ago      Exited              kube-controller-manager   2                   50d153de402bb       kube-controller-manager-functional-351278
	9732f6304c59d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   1                   6467ca6fcd998       coredns-66bc5c9577-kcks9
	
	
	==> coredns [9732f6304c59d282334ca0d28d637dd10341a1c86fd79880df7b7519a377526e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51843 - 42718 "HINFO IN 3262367968470923463.4546114080195651747. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.087807319s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d062b037dd6fb189144a98d2de6f1c0d776f2e6505f52278691cb46e10a23d74] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37152 - 37023 "HINFO IN 8047511030750463407.6215404466120594246. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.460804898s
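
The connection-refused errors above span the window in which this CoreDNS replica started before the restarted kube-apiserver was reachable on the Service VIP 10.96.0.1; after its startup timeout the kubernetes plugin proceeds with an unsynced cache (the WARNING line) and recovers once its watches connect. A quick way to check the same VIP from a pod or the minikube node is a bare HTTPS probe; a throwaway Go sketch (anonymous access to /readyz is a default-RBAC assumption, and the skip-verify client is only acceptable because this checks reachability, not identity):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 3 * time.Second,
    		Transport: &http.Transport{
    			// Reachability check only; do not do this for real API traffic.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	// The VIP and port are the ones CoreDNS was dialing in the log above.
    	resp, err := client.Get("https://10.96.0.1:443/readyz")
    	if err != nil {
    		fmt.Println("apiserver VIP unreachable:", err) // matches the "connection refused" phase
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("readyz: %s %s\n", resp.Status, body)
    }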
	
	
	==> describe nodes <==
	Name:               functional-351278
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-351278
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=functional-351278
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_22_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:22:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-351278
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:34:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:33:04 +0000   Fri, 19 Sep 2025 22:22:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:33:04 +0000   Fri, 19 Sep 2025 22:22:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:33:04 +0000   Fri, 19 Sep 2025 22:22:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:33:04 +0000   Fri, 19 Sep 2025 22:22:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    functional-351278
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9fbec12c9cc4da5a4819f98a915eadd
	  System UUID:                e9fbec12-c9cc-4da5-a481-9f98a915eadd
	  Boot ID:                    0ec8fcd8-9c60-4abf-860c-295d8944fa7f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hxq2h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	  default                     hello-node-connect-7d85dfc575-47ht6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-2hghz                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 coredns-66bc5c9577-kcks9                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-351278                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-351278              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-351278     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-gnc2m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-351278              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-wzzm4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-htp94         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-351278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-351278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-351278 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node functional-351278 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node functional-351278 event: Registered Node functional-351278 in Controller
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-351278 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-351278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-351278 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-351278 event: Registered Node functional-351278 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-351278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-351278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-351278 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-351278 event: Registered Node functional-351278 in Controller
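
As a cross-check on the Allocated resources summary above, the 1350m (67%) CPU figure is just the per-pod CPU requests summed against the node's 2-CPU (2000m) capacity; a throwaway Go sketch with the values transcribed from the pod table:

    package main

    import "fmt"

    func main() {
    	// CPU requests from the Non-terminated Pods table, in millicores.
    	requests := map[string]int64{
    		"mysql-5bb876957f-2hghz":                    600,
    		"coredns-66bc5c9577-kcks9":                  100,
    		"etcd-functional-351278":                    100,
    		"kube-apiserver-functional-351278":          250,
    		"kube-controller-manager-functional-351278": 200,
    		"kube-scheduler-functional-351278":          100,
    	}
    	var total int64
    	for _, m := range requests {
    		total += m
    	}
    	const capacity = 2000 // 2 CPUs from the Capacity block, in millicores
    	fmt.Printf("%dm of %dm (%d%%)\n", total, capacity, total*100/capacity) // 1350m of 2000m (67%)
    	// Memory works the same way: 512 + 70 + 100 = 682Mi against 4008584Ki ≈ 17%.
    }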
	
	
	==> dmesg <==
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000048] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000519] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.191929] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089286] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.095437] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.128311] kauditd_printk_skb: 171 callbacks suppressed
	[  +1.053733] kauditd_printk_skb: 18 callbacks suppressed
	[Sep19 22:23] kauditd_printk_skb: 220 callbacks suppressed
	[  +0.109627] kauditd_printk_skb: 11 callbacks suppressed
	[  +4.558363] kauditd_printk_skb: 243 callbacks suppressed
	[Sep19 22:24] kauditd_printk_skb: 137 callbacks suppressed
	[  +0.111375] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.310472] kauditd_printk_skb: 78 callbacks suppressed
	[  +0.126850] kauditd_printk_skb: 272 callbacks suppressed
	[  +5.561032] kauditd_printk_skb: 102 callbacks suppressed
	[  +4.743320] kauditd_printk_skb: 62 callbacks suppressed
	[  +0.048897] kauditd_printk_skb: 90 callbacks suppressed
	[Sep19 22:25] kauditd_printk_skb: 65 callbacks suppressed
	[ +24.004450] kauditd_printk_skb: 74 callbacks suppressed
	[Sep19 22:32] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.450214] kauditd_printk_skb: 25 callbacks suppressed
	[Sep19 22:35] crun[9389]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [48123667739fa5c6e4b0d2d5215d09e671990614ccd0d3d4fc54cf1ff733e76f] <==
	{"level":"warn","ts":"2025-09-19T22:24:26.076069Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-09-19T22:24:26.076081Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.95:2380"]}
	{"level":"info","ts":"2025-09-19T22:24:26.076113Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-19T22:24:26.076644Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.95:2379"]}
	{"level":"info","ts":"2025-09-19T22:24:26.076833Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.4","git-sha":"5400cdc","go-version":"go1.23.11","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-351278","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.95:2380"],"listen-peer-urls":["https://192.168.39.95:2380"],"advertise-client-urls":["https://192.168.39.95:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.95:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cl
uster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	{"level":"info","ts":"2025-09-19T22:24:26.077611Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc000121390}"}
	{"level":"info","ts":"2025-09-19T22:24:26.095385Z","logger":"bbolt","caller":"bbolt@v1.4.2/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	{"level":"info","ts":"2025-09-19T22:24:26.095456Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"17.930871ms"}
	{"level":"info","ts":"2025-09-19T22:24:26.095504Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":567}
	{"level":"info","ts":"2025-09-19T22:24:26.119624Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-09-19T22:24:26.122969Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1179648,"backend-size":"1.2 MB","backend-size-in-use-bytes":1134592,"backend-size-in-use":"1.1 MB"}
	{"level":"info","ts":"2025-09-19T22:24:26.123965Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:26.160224Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"986e33f48d4d13ba","local-member-id":"a71e7bac075997","commit-index":567}
	{"level":"info","ts":"2025-09-19T22:24:26.162606Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	{"level":"info","ts":"2025-09-19T22:24:26.164048Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	{"level":"info","ts":"2025-09-19T22:24:26.175855Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:a71e7bac075997 RaftAttributes:{PeerURLs:[https://192.168.39.95:2380] IsLearner:false} Attributes:{Name:functional-351278 ClientURLs:[https://192.168.39.95:2379]}}"}
	{"level":"info","ts":"2025-09-19T22:24:26.178255Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-09-19T22:24:26.178279Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"986e33f48d4d13ba","local-member-id":"a71e7bac075997","recovered-remote-peer-id":"a71e7bac075997","recovered-remote-peer-urls":["https://192.168.39.95:2380"],"recovered-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-19T22:24:26.178430Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	{"level":"info","ts":"2025-09-19T22:24:26.178446Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	{"level":"info","ts":"2025-09-19T22:24:26.181063Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	{"level":"info","ts":"2025-09-19T22:24:26.182197Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"a71e7bac075997 switched to configuration voters=()"}
	{"level":"info","ts":"2025-09-19T22:24:26.182232Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"a71e7bac075997 became follower at term 3"}
	{"level":"info","ts":"2025-09-19T22:24:26.182240Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft a71e7bac075997 [peers: [], term: 3, commit: 567, applied: 0, lastindex: 567, lastterm: 3]"}
	{"level":"warn","ts":"2025-09-19T22:24:26.193088Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	
	
	==> etcd [ca9ceb7aad18b856b069a53fa36528acb133c0db8d9b22639ad02163b2a8d256] <==
	{"level":"warn","ts":"2025-09-19T22:24:32.996830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.005341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.017665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.035124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.045969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.059781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.082181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.102768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.122632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.133996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.148518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.159167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.173818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.188995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.203658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.215842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.227397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.262460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.272505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.305766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.321449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.361840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56662","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:34:32.200434Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":985}
	{"level":"info","ts":"2025-09-19T22:34:32.216240Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":985,"took":"15.353883ms","hash":2834440057,"current-db-size-bytes":3338240,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":3338240,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2025-09-19T22:34:32.216282Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2834440057,"revision":985,"compact-revision":-1}
	
	
	==> kernel <==
	 22:35:03 up 12 min,  0 users,  load average: 0.54, 0.46, 0.27
	Linux functional-351278 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Sep  9 02:24:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [6483900c50ab2f70850376c9f661c10fc004d33319d8bf76f53dbc28e3259c8b] <==
	I0919 22:24:39.692574       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:24:53.807975       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.225.20"}
	I0919 22:24:58.319626       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.163.45"}
	I0919 22:25:01.118840       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.109.72"}
	I0919 22:25:05.132702       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.132.69"}
	I0919 22:25:42.529501       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:25:57.346320       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:04.152951       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:06.674054       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:11.739177       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:34.539000       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:14.651185       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:40.362928       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:20.976132       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:47.885983       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:22.666809       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:47.895976       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:48.070414       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:51.478440       1 controller.go:667] quota admission added evaluator for: namespaces
	I0919 22:32:51.759720       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.83.102"}
	I0919 22:32:51.798748       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.48.82"}
	I0919 22:33:10.825978       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:07.268548       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:16.923720       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:34.169714       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [10c598550cc09fff82686c0933621cfc45b23d13c3546265f21a53cd4c27111f] <==
	I0919 22:24:26.606554       1 serving.go:386] Generated self-signed cert in-memory
	I0919 22:24:27.254668       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0919 22:24:27.254704       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:27.257310       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0919 22:24:27.257444       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 22:24:27.257720       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0919 22:24:27.257745       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [300bb2063f5a43edc6035ff49f65d585ce99e18457f8ac56a6d29b175cf12e8e] <==
	I0919 22:24:37.531489       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:37.531599       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:37.531619       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 22:24:37.538613       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:37.539827       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 22:24:37.544369       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 22:24:37.544487       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:24:37.544655       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 22:24:37.544722       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-351278"
	I0919 22:24:37.544777       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 22:24:37.545184       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:37.547281       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:24:37.550109       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0919 22:24:37.551543       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 22:24:37.559825       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 22:24:37.566126       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	E0919 22:32:51.577007       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.589036       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.610193       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.616342       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.628208       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.629041       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.639852       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.641267       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.647802       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [72cb2cb146fe5847510292057e63e40a96fd64e4f4a76d8696090e6a48ed4351] <==
	I0919 22:24:35.554555       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:35.656289       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:35.656600       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.95"]
	E0919 22:24:35.656975       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:35.712360       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0919 22:24:35.712449       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 22:24:35.712475       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:35.734784       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:35.735228       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:35.735258       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:35.737742       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:35.737851       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:35.743832       1 config.go:200] "Starting service config controller"
	I0919 22:24:35.744069       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:35.744808       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:35.744830       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:35.745343       1 config.go:309] "Starting node config controller"
	I0919 22:24:35.745354       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:35.745359       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:35.838538       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:24:35.844699       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:35.845223       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [eb8f63dfd7cc4378b1a91b1e35eef735c40ccd151cff64e84d64aa5de2121aae] <==
	I0919 22:24:25.364860       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:25.532215       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0919 22:24:25.538045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-351278&limit=500&resourceVersion=0\": dial tcp 192.168.39.95:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [654bd01629fdeca3657986cd26bfffe3f8a55afa36789f47b07bc6d11d84a780] <==
	I0919 22:24:33.397444       1 serving.go:386] Generated self-signed cert in-memory
	I0919 22:24:34.211621       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:24:34.211706       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:34.220243       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0919 22:24:34.220330       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0919 22:24:34.220402       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:34.220431       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:34.220457       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 22:24:34.220473       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 22:24:34.220485       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:24:34.220560       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:24:34.321480       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 22:24:34.321537       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0919 22:24:34.321583       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [a41d060c833a02a6404078af8ea2569c9f77f65721fd004704b5b3e80d50bd61] <==
	I0919 22:24:26.731449       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:24:27.211844       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.95:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.95:8441: connect: connection refused
	W0919 22:24:27.212028       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:24:27.212120       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:24:27.235673       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:24:27.235715       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0919 22:24:27.235731       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0919 22:24:27.237834       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:24:27.238080       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:24:27.238148       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:27.238179       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:24:27.238622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.39.95:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.95:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:24:27.238839       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E0919 22:24:27.239533       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:27.239623       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0919 22:24:27.239664       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0919 22:24:27.239699       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:27.239717       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0919 22:24:27.239793       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0919 22:24:27.239824       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 19 22:34:07 functional-351278 kubelet[6067]: E0919 22:34:07.953859    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-47ht6" podUID="3af4e468-7894-42f7-8bf3-23cebdaddd0c"
	Sep 19 22:34:10 functional-351278 kubelet[6067]: E0919 22:34:10.231736    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321250231043474  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:10 functional-351278 kubelet[6067]: E0919 22:34:10.231756    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321250231043474  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:17 functional-351278 kubelet[6067]: E0919 22:34:17.955834    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2hghz" podUID="e83df643-ba88-420d-ade1-6c4a474cd6fd"
	Sep 19 22:34:18 functional-351278 kubelet[6067]: E0919 22:34:18.952788    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-47ht6" podUID="3af4e468-7894-42f7-8bf3-23cebdaddd0c"
	Sep 19 22:34:20 functional-351278 kubelet[6067]: E0919 22:34:20.233512    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321260233210156  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:20 functional-351278 kubelet[6067]: E0919 22:34:20.233553    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321260233210156  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:29 functional-351278 kubelet[6067]: E0919 22:34:29.957708    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-47ht6" podUID="3af4e468-7894-42f7-8bf3-23cebdaddd0c"
	Sep 19 22:34:29 functional-351278 kubelet[6067]: E0919 22:34:29.960296    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2hghz" podUID="e83df643-ba88-420d-ade1-6c4a474cd6fd"
	Sep 19 22:34:30 functional-351278 kubelet[6067]: E0919 22:34:30.063808    6067 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod1d98d841-9889-4f5a-b46d-4c15a211c7e9/crio-6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5: Error finding container 6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5: Status 404 returned error can't find the container with id 6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5
	Sep 19 22:34:30 functional-351278 kubelet[6067]: E0919 22:34:30.235516    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321270234655293  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:30 functional-351278 kubelet[6067]: E0919 22:34:30.235541    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321270234655293  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:40 functional-351278 kubelet[6067]: E0919 22:34:40.237340    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321280236808475  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:40 functional-351278 kubelet[6067]: E0919 22:34:40.237365    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321280236808475  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:41 functional-351278 kubelet[6067]: E0919 22:34:41.953566    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-47ht6" podUID="3af4e468-7894-42f7-8bf3-23cebdaddd0c"
	Sep 19 22:34:44 functional-351278 kubelet[6067]: E0919 22:34:44.858380    6067 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 19 22:34:44 functional-351278 kubelet[6067]: E0919 22:34:44.858464    6067 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 19 22:34:44 functional-351278 kubelet[6067]: E0919 22:34:44.858751    6067 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-hxq2h_default(10da76bb-ac93-4179-85ff-f400a8350ff2): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 19 22:34:44 functional-351278 kubelet[6067]: E0919 22:34:44.858789    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-hxq2h" podUID="10da76bb-ac93-4179-85ff-f400a8350ff2"
	Sep 19 22:34:50 functional-351278 kubelet[6067]: E0919 22:34:50.238758    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321290238335088  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:50 functional-351278 kubelet[6067]: E0919 22:34:50.238803    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321290238335088  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:52 functional-351278 kubelet[6067]: E0919 22:34:52.953557    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-47ht6" podUID="3af4e468-7894-42f7-8bf3-23cebdaddd0c"
	Sep 19 22:34:55 functional-351278 kubelet[6067]: E0919 22:34:55.955507    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-hxq2h" podUID="10da76bb-ac93-4179-85ff-f400a8350ff2"
	Sep 19 22:35:00 functional-351278 kubelet[6067]: E0919 22:35:00.240618    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321300240221582  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:35:00 functional-351278 kubelet[6067]: E0919 22:35:00.240662    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321300240221582  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
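
Every ErrImagePull/ImagePullBackOff in the kubelet excerpt above carries the same root cause: docker.io refused anonymous manifest reads with toomanyrequests, Docker Hub's unauthenticated pull rate limit (100 pulls per 6 hours per source IP for anonymous clients, per Docker's published limits). The remaining quota for this host can be probed with Docker's documented ratelimitpreview/test endpoint; a minimal sketch, assuming curl and jq are available on the agent (a HEAD request does not consume quota):

	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" \
	  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
	  | grep -i '^ratelimit'
	# expect headers like: ratelimit-limit: 100;w=21600  ratelimit-remaining: <n>;w=21600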
	
	
	==> storage-provisioner [18c2d509c5728590a77ee037c7f0994bafee35e8bf16b51258dee0762a0a53b1] <==
	I0919 22:24:25.563398       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 22:24:25.566735       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
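
This first storage-provisioner instance died at 22:24:25, while the apiserver was still coming back during the functional restart: the kubernetes service VIP (10.96.0.1:443) refused connections, so the version probe aborted the process. The replacement instance below came up cleanly. One way to confirm the apiserver is answering before expecting in-cluster clients to work, assuming the kubeconfig context from this run:

	kubectl --context functional-351278 get --raw='/readyz?verbose' | tail -n 3
	kubectl --context functional-351278 version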
	
	
	==> storage-provisioner [f830720c30812bcf9aa33ac20c632924b1c7120a65fd62b5dcfd937a7ae0e617] <==
	W0919 22:34:38.249442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:40.253422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:40.262801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:42.266612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:42.275747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:44.280040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:44.286852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:46.291018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:46.296225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:48.299843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:48.309830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:50.313669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:50.318804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:52.322832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:52.329843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:54.334250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:54.339754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:56.343482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:56.349052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:58.352648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:58.361999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:00.370317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:00.381859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:02.385079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:02.395776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
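
These warnings are benign: as the message says, core/v1 Endpoints is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice, and the ~2s cadence suggests they come from the provisioner's Endpoints-based leader-election lock rather than from workload traffic. The replacement API can be inspected directly; a quick check, assuming the same context:

	kubectl --context functional-351278 get endpointslices -A
	kubectl --context functional-351278 get endpointslices -n kube-system -o wide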
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-351278 -n functional-351278
helpers_test.go:269: (dbg) Run:  kubectl --context functional-351278 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-hxq2h hello-node-connect-7d85dfc575-47ht6 mysql-5bb876957f-2hghz sp-pod dashboard-metrics-scraper-77bf4d6c4c-wzzm4 kubernetes-dashboard-855c9754f9-htp94
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-351278 describe pod busybox-mount hello-node-75c85bcc94-hxq2h hello-node-connect-7d85dfc575-47ht6 mysql-5bb876957f-2hghz sp-pod dashboard-metrics-scraper-77bf4d6c4c-wzzm4 kubernetes-dashboard-855c9754f9-htp94
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-351278 describe pod busybox-mount hello-node-75c85bcc94-hxq2h hello-node-connect-7d85dfc575-47ht6 mysql-5bb876957f-2hghz sp-pod dashboard-metrics-scraper-77bf4d6c4c-wzzm4 kubernetes-dashboard-855c9754f9-htp94: exit status 1 (171.465284ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:31:10 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://5a787edfb8f9c734d3f5475054443381d7a2a79092acbf9a206f87afe5b0a0c5
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 19 Sep 2025 22:32:42 +0000
	      Finished:     Fri, 19 Sep 2025 22:32:42 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n59bf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-n59bf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m53s  default-scheduler  Successfully assigned default/busybox-mount to functional-351278
	  Normal  Pulling    3m54s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m22s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.33s (1m31.478s including waiting). Image size: 4631262 bytes.
	  Normal  Created    2m22s  kubelet            Created container: mount-munger
	  Normal  Started    2m22s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-hxq2h
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:25:05 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6jtdq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6jtdq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m59s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hxq2h to functional-351278
	  Warning  Failed     3m25s (x3 over 8m2s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m34s (x4 over 9m59s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     20s (x4 over 8m2s)     kubelet            Error: ErrImagePull
	  Warning  Failed     20s                    kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    9s (x6 over 8m2s)      kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     9s (x6 over 8m2s)      kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-47ht6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:25:01 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6kzd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s6kzd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-47ht6 to functional-351278
	  Warning  Failed     8m33s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m6s (x4 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     81s (x4 over 8m33s)   kubelet            Error: ErrImagePull
	  Warning  Failed     81s (x3 over 6m)      kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    12s (x11 over 8m33s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     12s (x11 over 8m33s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-2hghz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:24:58 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjr59 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vjr59:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-2hghz to functional-351278
	  Warning  Failed     9m34s                  kubelet            Failed to pull image "docker.io/mysql:5.7": copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m26s (x2 over 6m31s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     111s (x4 over 9m34s)   kubelet            Error: ErrImagePull
	  Warning  Failed     111s                   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    35s (x11 over 9m34s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     35s (x11 over 9m34s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    23s (x5 over 10m)      kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:25:06 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zqhg9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-zqhg9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m58s                  default-scheduler  Successfully assigned default/sp-pod to functional-351278
	  Warning  Failed     4m59s (x2 over 7m32s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m23s (x3 over 7m32s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m23s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    118s (x4 over 7m31s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     118s (x4 over 7m31s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    103s (x4 over 9m58s)   kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-wzzm4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-htp94" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-351278 describe pod busybox-mount hello-node-75c85bcc94-hxq2h hello-node-connect-7d85dfc575-47ht6 mysql-5bb876957f-2hghz sp-pod dashboard-metrics-scraper-77bf4d6c4c-wzzm4 kubernetes-dashboard-855c9754f9-htp94: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.42s)
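
All four stuck pods in this run (hello-node, hello-node-connect, mysql, sp-pod) fail for the same external reason: anonymous image pulls from docker.io. A workaround the image tests in this run already exercise (see the image load rows in the Audit table further down) is to pull once on the host, where credentials or a warm cache are available, and side-load the image into the cluster instead of pulling inside the VM; a sketch using the mysql image as the example:

	docker pull docker.io/mysql:5.7          # on the host, authenticated via `docker login` if needed
	out/minikube-linux-amd64 -p functional-351278 image load --daemon docker.io/mysql:5.7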

TestFunctional/parallel/PersistentVolumeClaim (369.05s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [dd642869-ae38-45a0-ad01-05dcfff73601] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00394246s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-351278 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-351278 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-351278 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-351278 apply -f testdata/storage-provisioner/pod.yaml
I0919 22:25:06.239434   18671 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [016b4f51-c35b-4a5d-890c-75d3735f5b43] Pending
helpers_test.go:352: "sp-pod" [016b4f51-c35b-4a5d-890c-75d3735f5b43] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0919 22:26:36.656114   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:04.372111   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-351278 -n functional-351278
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-19 22:31:06.519695077 +0000 UTC m=+1021.917669280
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-351278 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-351278 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-351278/192.168.39.95
Start Time:       Fri, 19 Sep 2025 22:25:06 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:  10.244.0.9
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zqhg9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-zqhg9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/sp-pod to functional-351278
  Warning  Failed     61s (x2 over 3m34s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     61s (x2 over 3m34s)  kubelet            Error: ErrImagePull
  Normal   BackOff    46s (x2 over 3m33s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     46s (x2 over 3m33s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    32s (x3 over 6m)     kubelet            Pulling image "docker.io/nginx"
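
The same failure history can be read straight from the event stream, which is often quicker than describe when triaging; a probe, assuming the context from this run:

	kubectl --context functional-351278 get events -n default \
	  --field-selector involvedObject.name=sp-pod --sort-by=.lastTimestamp
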
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-351278 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-351278 logs sp-pod -n default: exit status 1 (75.854162ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-351278 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
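
The wait the test performed can be reproduced by hand with the same label selector and deadline; a sketch, assuming the context from this run:

	kubectl --context functional-351278 -n default wait --for=condition=Ready pod \
	  -l test=storage-provisioner --timeout=6m0s
	kubectl --context functional-351278 -n default get pod -l test=storage-provisioner
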
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-351278 -n functional-351278
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-351278 logs -n 25: (1.525875656s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-351278 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                     │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ ssh     │ functional-351278 ssh -n functional-351278 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ ssh     │ functional-351278 ssh sudo cat /etc/ssl/certs/186712.pem                                                                                                     │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ cp      │ functional-351278 cp functional-351278:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3792593431/001/cp-test.txt                                   │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ image   │ functional-351278 image load --daemon kicbase/echo-server:functional-351278 --alsologtostderr                                                                │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:25 UTC │
	│ ssh     │ functional-351278 ssh sudo cat /usr/share/ca-certificates/186712.pem                                                                                         │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ ssh     │ functional-351278 ssh -n functional-351278 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ ssh     │ functional-351278 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                     │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ cp      │ functional-351278 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ ssh     │ functional-351278 ssh echo hello                                                                                                                             │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ ssh     │ functional-351278 ssh -n functional-351278 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ ssh     │ functional-351278 ssh cat /etc/hostname                                                                                                                      │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:24 UTC │ 19 Sep 25 22:24 UTC │
	│ image   │ functional-351278 image ls                                                                                                                                   │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:25 UTC │ 19 Sep 25 22:25 UTC │
	│ image   │ functional-351278 image load --daemon kicbase/echo-server:functional-351278 --alsologtostderr                                                                │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:25 UTC │ 19 Sep 25 22:25 UTC │
	│ addons  │ functional-351278 addons list                                                                                                                                │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:25 UTC │ 19 Sep 25 22:25 UTC │
	│ addons  │ functional-351278 addons list -o json                                                                                                                        │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:25 UTC │ 19 Sep 25 22:25 UTC │
	│ image   │ functional-351278 image ls                                                                                                                                   │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:25 UTC │ 19 Sep 25 22:25 UTC │
	│ image   │ functional-351278 image load --daemon kicbase/echo-server:functional-351278 --alsologtostderr                                                                │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:25 UTC │ 19 Sep 25 22:25 UTC │
	│ image   │ functional-351278 image ls                                                                                                                                   │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:25 UTC │ 19 Sep 25 22:25 UTC │
	│ image   │ functional-351278 image save kicbase/echo-server:functional-351278 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:25 UTC │ 19 Sep 25 22:25 UTC │
	│ image   │ functional-351278 image rm kicbase/echo-server:functional-351278 --alsologtostderr                                                                           │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:25 UTC │ 19 Sep 25 22:25 UTC │
	│ image   │ functional-351278 image ls                                                                                                                                   │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:25 UTC │ 19 Sep 25 22:25 UTC │
	│ image   │ functional-351278 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:25 UTC │ 19 Sep 25 22:25 UTC │
	│ image   │ functional-351278 image ls                                                                                                                                   │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:25 UTC │ 19 Sep 25 22:25 UTC │
	│ image   │ functional-351278 image save --daemon kicbase/echo-server:functional-351278 --alsologtostderr                                                                │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:25 UTC │ 19 Sep 25 22:25 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
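
The image rows above trace a complete save/load round-trip through a tarball, the same mechanism that can cache any image out-of-band of docker.io; an illustrative sketch (the /tmp path is hypothetical — the audit rows used the workspace path):

	out/minikube-linux-amd64 -p functional-351278 image save kicbase/echo-server:functional-351278 /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-351278 image load /tmp/echo-server.tar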
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:24:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:24:07.955964   24965 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:24:07.956190   24965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:07.956194   24965 out.go:374] Setting ErrFile to fd 2...
	I0919 22:24:07.956197   24965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:24:07.956429   24965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	I0919 22:24:07.956877   24965 out.go:368] Setting JSON to false
	I0919 22:24:07.957753   24965 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3975,"bootTime":1758316673,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:24:07.957830   24965 start.go:140] virtualization: kvm guest
	I0919 22:24:07.959654   24965 out.go:179] * [functional-351278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:24:07.960838   24965 notify.go:220] Checking for updates...
	I0919 22:24:07.960856   24965 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:24:07.961937   24965 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:24:07.963054   24965 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 22:24:07.964202   24965 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 22:24:07.965171   24965 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:24:07.966096   24965 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:24:07.967445   24965 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:24:07.967545   24965 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:24:07.967991   24965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:24:07.968042   24965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:24:07.982055   24965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34987
	I0919 22:24:07.982559   24965 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:24:07.983091   24965 main.go:141] libmachine: Using API Version  1
	I0919 22:24:07.983108   24965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:24:07.983459   24965 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:24:07.983649   24965 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:24:08.015095   24965 out.go:179] * Using the kvm2 driver based on existing profile
	I0919 22:24:08.016116   24965 start.go:304] selected driver: kvm2
	I0919 22:24:08.016139   24965 start.go:918] validating driver "kvm2" against &{Name:functional-351278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-351278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:08.016225   24965 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:24:08.016628   24965 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:08.016713   24965 install.go:123] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21594-14764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 22:24:08.030575   24965 install.go:134] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is {Version:v1.37.0 Commit:6e37ee63f758843bb5fe33c3a528c564c4b83d53}
	I0919 22:24:08.030594   24965 install.go:123] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21594-14764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 22:24:08.044540   24965 install.go:134] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is {Version:v1.37.0 Commit:6e37ee63f758843bb5fe33c3a528c564c4b83d53}
	I0919 22:24:08.045263   24965 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:24:08.045286   24965 cni.go:84] Creating CNI manager for ""
	I0919 22:24:08.045339   24965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 22:24:08.045387   24965 start.go:348] cluster config:
	{Name:functional-351278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-351278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:08.045472   24965 iso.go:125] acquiring lock: {Name:mk21ede999fca7478b081d3e470ef3cc88b140f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:24:08.047162   24965 out.go:179] * Starting "functional-351278" primary control-plane node in "functional-351278" cluster
	I0919 22:24:08.048310   24965 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:24:08.048340   24965 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:24:08.048357   24965 cache.go:58] Caching tarball of preloaded images
	I0919 22:24:08.048435   24965 preload.go:172] Found /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:24:08.048440   24965 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:24:08.048533   24965 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/config.json ...
	I0919 22:24:08.048715   24965 start.go:360] acquireMachinesLock for functional-351278: {Name:mke6cd936cf5da66e4fbcd4dcd8a2d3d3cae6c7b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 22:24:08.048781   24965 start.go:364] duration metric: took 32.022µs to acquireMachinesLock for "functional-351278"
	I0919 22:24:08.048792   24965 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:24:08.048796   24965 fix.go:54] fixHost starting: 
	I0919 22:24:08.049037   24965 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:24:08.049061   24965 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:24:08.062592   24965 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
	I0919 22:24:08.063052   24965 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:24:08.063471   24965 main.go:141] libmachine: Using API Version  1
	I0919 22:24:08.063483   24965 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:24:08.063843   24965 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:24:08.064029   24965 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:24:08.064164   24965 main.go:141] libmachine: (functional-351278) Calling .GetState
	I0919 22:24:08.065944   24965 fix.go:112] recreateIfNeeded on functional-351278: state=Running err=<nil>
	W0919 22:24:08.065959   24965 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:24:08.067751   24965 out.go:252] * Updating the running kvm2 "functional-351278" VM ...
	I0919 22:24:08.067776   24965 machine.go:93] provisionDockerMachine start ...
	I0919 22:24:08.067786   24965 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:24:08.067979   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
	I0919 22:24:08.070507   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:08.070985   24965 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
	I0919 22:24:08.071007   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:08.071165   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHPort
	I0919 22:24:08.071344   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
	I0919 22:24:08.071483   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
	I0919 22:24:08.071604   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHUsername
	I0919 22:24:08.071752   24965 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:08.071951   24965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0919 22:24:08.071955   24965 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:24:08.178091   24965 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-351278
	
	I0919 22:24:08.178111   24965 main.go:141] libmachine: (functional-351278) Calling .GetMachineName
	I0919 22:24:08.178343   24965 buildroot.go:166] provisioning hostname "functional-351278"
	I0919 22:24:08.178367   24965 main.go:141] libmachine: (functional-351278) Calling .GetMachineName
	I0919 22:24:08.178530   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
	I0919 22:24:08.181383   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:08.181888   24965 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
	I0919 22:24:08.181910   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:08.182106   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHPort
	I0919 22:24:08.182295   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
	I0919 22:24:08.182443   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
	I0919 22:24:08.182581   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHUsername
	I0919 22:24:08.182769   24965 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:08.182949   24965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0919 22:24:08.182956   24965 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-351278 && echo "functional-351278" | sudo tee /etc/hostname
	I0919 22:24:08.309497   24965 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-351278
	
	I0919 22:24:08.309510   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
	I0919 22:24:08.312458   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:08.312865   24965 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
	I0919 22:24:08.312901   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:08.313109   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHPort
	I0919 22:24:08.313296   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
	I0919 22:24:08.313470   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
	I0919 22:24:08.313619   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHUsername
	I0919 22:24:08.313796   24965 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:08.314019   24965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0919 22:24:08.314032   24965 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-351278' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-351278/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-351278' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:24:08.429454   24965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
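The guard script above only touches /etc/hosts when the 127.0.1.1 hostname entry is missing. A minimal Go sketch of composing that script for an arbitrary hostname (hostsScript is a hypothetical helper, not minikube's actual API; the shell body is copied from the log):

	package main

	import "fmt"

	// hostsScript renders the idempotent /etc/hosts guard seen in the log:
	// only rewrite or append the 127.0.1.1 line when the name is absent.
	func hostsScript(name string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	}

	func main() {
		fmt.Println(hostsScript("functional-351278"))
	}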
	I0919 22:24:08.429472   24965 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14764/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14764/.minikube}
	I0919 22:24:08.429493   24965 buildroot.go:174] setting up certificates
	I0919 22:24:08.429503   24965 provision.go:84] configureAuth start
	I0919 22:24:08.429513   24965 main.go:141] libmachine: (functional-351278) Calling .GetMachineName
	I0919 22:24:08.429793   24965 main.go:141] libmachine: (functional-351278) Calling .GetIP
	I0919 22:24:08.433153   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:08.433618   24965 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
	I0919 22:24:08.433635   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:08.433917   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
	I0919 22:24:08.436537   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:08.436938   24965 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
	I0919 22:24:08.436958   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:08.437176   24965 provision.go:143] copyHostCerts
	I0919 22:24:08.437217   24965 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14764/.minikube/ca.pem, removing ...
	I0919 22:24:08.437231   24965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14764/.minikube/ca.pem
	I0919 22:24:08.437301   24965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14764/.minikube/ca.pem (1082 bytes)
	I0919 22:24:08.437403   24965 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14764/.minikube/cert.pem, removing ...
	I0919 22:24:08.437407   24965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14764/.minikube/cert.pem
	I0919 22:24:08.437431   24965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14764/.minikube/cert.pem (1123 bytes)
	I0919 22:24:08.437497   24965 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14764/.minikube/key.pem, removing ...
	I0919 22:24:08.437500   24965 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14764/.minikube/key.pem
	I0919 22:24:08.437521   24965 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14764/.minikube/key.pem (1679 bytes)
	I0919 22:24:08.437586   24965 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca-key.pem org=jenkins.functional-351278 san=[127.0.0.1 192.168.39.95 functional-351278 localhost minikube]
	I0919 22:24:08.599566   24965 provision.go:177] copyRemoteCerts
	I0919 22:24:08.599606   24965 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:24:08.599625   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
	I0919 22:24:08.602494   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:08.602795   24965 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
	I0919 22:24:08.602822   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:08.603009   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHPort
	I0919 22:24:08.603185   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
	I0919 22:24:08.603335   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHUsername
	I0919 22:24:08.603494   24965 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/functional-351278/id_rsa Username:docker}
	I0919 22:24:08.688808   24965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 22:24:08.722137   24965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0919 22:24:08.753995   24965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:24:08.786388   24965 provision.go:87] duration metric: took 356.874601ms to configureAuth
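configureAuth regenerates the machine's server certificate with the SANs listed in the provision line above, signed by the existing minikube CA. A sketch of that step with Go's crypto/x509 (paths, org, and the PKCS#1 key format are assumptions based on the log, not a verbatim copy of minikube's code):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Assumes ca.pem / ca-key.pem in the working directory (PKCS#1 RSA
		// key, as machine-driver CAs typically are).
		caPEM, err := os.ReadFile("ca.pem")
		must(err)
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		must(err)
		caBlock, _ := pem.Decode(caPEM)
		ca, err := x509.ParseCertificate(caBlock.Bytes)
		must(err)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		must(err)

		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-351278"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the provision log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.95")},
			DNSNames:    []string{"functional-351278", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &serverKey.PublicKey, caKey)
		must(err)
		must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}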
	I0919 22:24:08.786405   24965 buildroot.go:189] setting minikube options for container-runtime
	I0919 22:24:08.786583   24965 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:24:08.786655   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
	I0919 22:24:08.789703   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:08.790112   24965 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
	I0919 22:24:08.790146   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:08.790337   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHPort
	I0919 22:24:08.790511   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
	I0919 22:24:08.790646   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
	I0919 22:24:08.790778   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHUsername
	I0919 22:24:08.790899   24965 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:08.791088   24965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0919 22:24:08.791097   24965 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:24:14.444677   24965 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:24:14.444695   24965 machine.go:96] duration metric: took 6.3769127s to provisionDockerMachine
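Most of the 6.4s spent in provisionDockerMachine is the `systemctl restart crio` at the end of the command above (22:24:08 to 22:24:14). The drop-in marks the cluster's ServiceCIDR as an insecure registry so in-cluster registries work without TLS. A sketch of rendering that remote command (crioOptionsCmd is an illustrative helper):

	package main

	import "fmt"

	// crioOptionsCmd renders the sysconfig drop-in write plus restart seen
	// in the log above.
	func crioOptionsCmd(insecureCIDR string) string {
		return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, insecureCIDR)
	}

	func main() {
		fmt.Println(crioOptionsCmd("10.96.0.0/12"))
	}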
	I0919 22:24:14.444706   24965 start.go:293] postStartSetup for "functional-351278" (driver="kvm2")
	I0919 22:24:14.444718   24965 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:24:14.444759   24965 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:24:14.445079   24965 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:24:14.445097   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
	I0919 22:24:14.448616   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:14.449103   24965 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
	I0919 22:24:14.449124   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:14.449342   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHPort
	I0919 22:24:14.449509   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
	I0919 22:24:14.449651   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHUsername
	I0919 22:24:14.449811   24965 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/functional-351278/id_rsa Username:docker}
	I0919 22:24:14.535464   24965 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:24:14.540742   24965 info.go:137] Remote host: Buildroot 2025.02
	I0919 22:24:14.540760   24965 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14764/.minikube/addons for local assets ...
	I0919 22:24:14.540827   24965 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14764/.minikube/files for local assets ...
	I0919 22:24:14.540892   24965 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/ssl/certs/186712.pem -> 186712.pem in /etc/ssl/certs
	I0919 22:24:14.540969   24965 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/test/nested/copy/18671/hosts -> hosts in /etc/test/nested/copy/18671
	I0919 22:24:14.541000   24965 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/18671
	I0919 22:24:14.553321   24965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/ssl/certs/186712.pem --> /etc/ssl/certs/186712.pem (1708 bytes)
	I0919 22:24:14.584713   24965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/test/nested/copy/18671/hosts --> /etc/test/nested/copy/18671/hosts (40 bytes)
	I0919 22:24:14.616212   24965 start.go:296] duration metric: took 171.493957ms for postStartSetup
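The filesync scan above mirrors everything under .minikube/files into the guest at the same relative path (e.g. files/etc/ssl/certs/186712.pem -> /etc/ssl/certs/186712.pem). A sketch of the scan, assuming a plain directory walk:

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
	)

	func main() {
		// Each regular file under the files root maps to "/" + its relative
		// path inside the guest.
		root := "/home/jenkins/minikube-integration/21594-14764/.minikube/files"
		filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, _ := filepath.Rel(root, p)
			fmt.Println(p, "->", "/"+filepath.ToSlash(rel))
			return nil
		})
	}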
	I0919 22:24:14.616239   24965 fix.go:56] duration metric: took 6.567441996s for fixHost
	I0919 22:24:14.616259   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
	I0919 22:24:14.619058   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:14.619402   24965 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
	I0919 22:24:14.619421   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:14.619637   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHPort
	I0919 22:24:14.619898   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
	I0919 22:24:14.620088   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
	I0919 22:24:14.620262   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHUsername
	I0919 22:24:14.620403   24965 main.go:141] libmachine: Using SSH client type: native
	I0919 22:24:14.620655   24965 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0919 22:24:14.620663   24965 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 22:24:14.729718   24965 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758320654.722218159
	
	I0919 22:24:14.729749   24965 fix.go:216] guest clock: 1758320654.722218159
	I0919 22:24:14.729758   24965 fix.go:229] Guest: 2025-09-19 22:24:14.722218159 +0000 UTC Remote: 2025-09-19 22:24:14.616241684 +0000 UTC m=+6.696728601 (delta=105.976475ms)
	I0919 22:24:14.729819   24965 fix.go:200] guest clock delta is within tolerance: 105.976475ms
	I0919 22:24:14.729825   24965 start.go:83] releasing machines lock for "functional-351278", held for 6.681037202s
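fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the ~106ms drift seen here. A rough sketch of that comparison; the one-second tolerance is an assumption for illustration, not minikube's actual constant:

	package main

	import (
		"fmt"
		"math"
		"time"
	)

	// Assumed threshold; the real value lives in minikube's fix.go.
	const tolerance = time.Second

	func withinTolerance(guest, host time.Time) bool {
		return math.Abs(float64(guest.Sub(host))) <= float64(tolerance)
	}

	func main() {
		host := time.Now()
		guest := host.Add(105 * time.Millisecond) // delta from the log: ~105.98ms
		fmt.Println("delta ok:", withinTolerance(guest, host))
	}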
	I0919 22:24:14.729855   24965 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:24:14.730132   24965 main.go:141] libmachine: (functional-351278) Calling .GetIP
	I0919 22:24:14.733309   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:14.733775   24965 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
	I0919 22:24:14.733798   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:14.733956   24965 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:24:14.734456   24965 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:24:14.734627   24965 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:24:14.734740   24965 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:24:14.734772   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
	I0919 22:24:14.734817   24965 ssh_runner.go:195] Run: cat /version.json
	I0919 22:24:14.734834   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
	I0919 22:24:14.737982   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:14.737996   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:14.738449   24965 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
	I0919 22:24:14.738470   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:14.738496   24965 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
	I0919 22:24:14.738539   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:14.738736   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHPort
	I0919 22:24:14.738931   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHPort
	I0919 22:24:14.738949   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
	I0919 22:24:14.739123   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHUsername
	I0919 22:24:14.739133   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
	I0919 22:24:14.739343   24965 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/functional-351278/id_rsa Username:docker}
	I0919 22:24:14.739373   24965 main.go:141] libmachine: (functional-351278) Calling .GetSSHUsername
	I0919 22:24:14.739498   24965 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/functional-351278/id_rsa Username:docker}
	I0919 22:24:14.818972   24965 ssh_runner.go:195] Run: systemctl --version
	I0919 22:24:14.846798   24965 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:24:14.995306   24965 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 22:24:15.003307   24965 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 22:24:15.003362   24965 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:24:15.015174   24965 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
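The find/mv step above sidelines any bridge or podman CNI config so CRI-O stops loading it; here nothing matched. A Go sketch of the same rename-with-suffix convention:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		// Rename bridge/podman CNI configs in /etc/cni/net.d with the
		// .mk_disabled suffix used in the log above.
		files, _ := filepath.Glob("/etc/cni/net.d/*")
		for _, f := range files {
			base := filepath.Base(f)
			if (strings.Contains(base, "bridge") || strings.Contains(base, "podman")) &&
				!strings.HasSuffix(base, ".mk_disabled") {
				fmt.Println("disabling", f)
				os.Rename(f, f+".mk_disabled")
			}
		}
	}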
	I0919 22:24:15.015188   24965 start.go:495] detecting cgroup driver to use...
	I0919 22:24:15.015251   24965 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:24:15.037953   24965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:24:15.057562   24965 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:24:15.057605   24965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:24:15.079579   24965 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:24:15.096893   24965 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:24:15.280834   24965 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:24:15.454610   24965 docker.go:234] disabling docker service ...
	I0919 22:24:15.454667   24965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:24:15.486308   24965 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:24:15.502935   24965 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:24:15.691204   24965 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:24:15.864417   24965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:24:15.880941   24965 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:24:15.905814   24965 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:24:15.905900   24965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:24:15.920265   24965 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 22:24:15.920311   24965 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:24:15.933811   24965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:24:15.946842   24965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:24:15.960579   24965 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:24:15.974766   24965 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:24:15.988018   24965 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:24:16.001952   24965 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:24:16.015549   24965 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:24:16.026404   24965 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:24:16.038613   24965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:16.212013   24965 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:24:23.447701   24965 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.235664835s)
	I0919 22:24:23.447717   24965 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:24:23.447793   24965 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:24:23.453950   24965 start.go:563] Will wait 60s for crictl version
	I0919 22:24:23.454009   24965 ssh_runner.go:195] Run: which crictl
	I0919 22:24:23.458775   24965 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:24:23.496945   24965 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 22:24:23.497021   24965 ssh_runner.go:195] Run: crio --version
	I0919 22:24:23.537737   24965 ssh_runner.go:195] Run: crio --version
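The version probe above runs crictl and reads its plain-text output ("RuntimeVersion:  1.29.1" in this run). A sketch of extracting that field:

	package main

	import (
		"fmt"
		"os/exec"
		"regexp"
	)

	func main() {
		// Same invocation as the log; pull RuntimeVersion out with a regexp.
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		if err != nil {
			panic(err)
		}
		m := regexp.MustCompile(`RuntimeVersion:\s*(\S+)`).FindSubmatch(out)
		if m != nil {
			fmt.Println("cri-o version:", string(m[1]))
		}
	}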
	I0919 22:24:23.573123   24965 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0919 22:24:23.574559   24965 main.go:141] libmachine: (functional-351278) Calling .GetIP
	I0919 22:24:23.577795   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:23.578152   24965 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
	I0919 22:24:23.578172   24965 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
	I0919 22:24:23.578463   24965 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 22:24:23.585790   24965 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0919 22:24:23.587135   24965 kubeadm.go:875] updating cluster {Name:functional-351278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-351278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:24:23.587276   24965 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:24:23.587336   24965 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:24:23.636709   24965 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:24:23.636722   24965 crio.go:433] Images already preloaded, skipping extraction
	I0919 22:24:23.636810   24965 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:24:23.677772   24965 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:24:23.677793   24965 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:24:23.677800   24965 kubeadm.go:926] updating node { 192.168.39.95 8441 v1.34.0 crio true true} ...
	I0919 22:24:23.677944   24965 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-351278 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-351278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:24:23.678037   24965 ssh_runner.go:195] Run: crio config
	I0919 22:24:23.727911   24965 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0919 22:24:23.727940   24965 cni.go:84] Creating CNI manager for ""
	I0919 22:24:23.727953   24965 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 22:24:23.727964   24965 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:24:23.727983   24965 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.95 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-351278 NodeName:functional-351278 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:24:23.728137   24965 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.95
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-351278"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.95"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.95"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
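	The manifest above is rendered from Go templates in minikube's bootstrapper before being copied to /var/tmp/minikube/kubeadm.yaml.new. A trimmed, illustrative stand-in (template and field names here are a sketch, not minikube's actual template):

	package main

	import (
		"os"
		"text/template"
	)

	// Trimmed stand-in for the ClusterConfiguration portion of the manifest
	// rendered above.
	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	controlPlaneEndpoint: {{.Endpoint}}
	kubernetesVersion: {{.Version}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		t.Execute(os.Stdout, map[string]string{
			"Endpoint":      "control-plane.minikube.internal:8441",
			"Version":       "v1.34.0",
			"PodSubnet":     "10.244.0.0/16",
			"ServiceSubnet": "10.96.0.0/12",
		})
	}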
	
	I0919 22:24:23.728191   24965 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:24:23.742376   24965 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:24:23.742436   24965 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 22:24:23.755989   24965 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0919 22:24:23.779698   24965 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:24:23.802550   24965 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2067 bytes)
	I0919 22:24:23.825081   24965 ssh_runner.go:195] Run: grep 192.168.39.95	control-plane.minikube.internal$ /etc/hosts
	I0919 22:24:23.829755   24965 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:24:24.014906   24965 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:24:24.035905   24965 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278 for IP: 192.168.39.95
	I0919 22:24:24.035923   24965 certs.go:194] generating shared ca certs ...
	I0919 22:24:24.035945   24965 certs.go:226] acquiring lock for ca certs: {Name:mk1fe71ea89348ba0bd576e99c774a344fba186e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:24:24.036105   24965 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14764/.minikube/ca.key
	I0919 22:24:24.036137   24965 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.key
	I0919 22:24:24.036143   24965 certs.go:256] generating profile certs ...
	I0919 22:24:24.036236   24965 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.key
	I0919 22:24:24.036299   24965 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/apiserver.key.4756d6f7
	I0919 22:24:24.036333   24965 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/proxy-client.key
	I0919 22:24:24.036437   24965 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/18671.pem (1338 bytes)
	W0919 22:24:24.036460   24965 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14764/.minikube/certs/18671_empty.pem, impossibly tiny 0 bytes
	I0919 22:24:24.036465   24965 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:24:24.036485   24965 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem (1082 bytes)
	I0919 22:24:24.036502   24965 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:24:24.036520   24965 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/key.pem (1679 bytes)
	I0919 22:24:24.036553   24965 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/ssl/certs/186712.pem (1708 bytes)
	I0919 22:24:24.037199   24965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:24:24.071014   24965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:24:24.103304   24965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:24:24.135092   24965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 22:24:24.169483   24965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:24:24.268646   24965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:24:24.388756   24965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:24:24.470317   24965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:24:24.567444   24965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/ssl/certs/186712.pem --> /usr/share/ca-certificates/186712.pem (1708 bytes)
	I0919 22:24:24.651265   24965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:24:24.748819   24965 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/certs/18671.pem --> /usr/share/ca-certificates/18671.pem (1338 bytes)
	I0919 22:24:24.822768   24965 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:24:24.888013   24965 ssh_runner.go:195] Run: openssl version
	I0919 22:24:24.900674   24965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18671.pem && ln -fs /usr/share/ca-certificates/18671.pem /etc/ssl/certs/18671.pem"
	I0919 22:24:24.936101   24965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18671.pem
	I0919 22:24:24.953774   24965 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:22 /usr/share/ca-certificates/18671.pem
	I0919 22:24:24.953837   24965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18671.pem
	I0919 22:24:24.972978   24965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18671.pem /etc/ssl/certs/51391683.0"
	I0919 22:24:25.011692   24965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/186712.pem && ln -fs /usr/share/ca-certificates/186712.pem /etc/ssl/certs/186712.pem"
	I0919 22:24:25.040876   24965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/186712.pem
	I0919 22:24:25.052661   24965 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:22 /usr/share/ca-certificates/186712.pem
	I0919 22:24:25.052738   24965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/186712.pem
	I0919 22:24:25.065110   24965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/186712.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:24:25.103499   24965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:24:25.133329   24965 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:25.145910   24965 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:25.145959   24965 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:24:25.162680   24965 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
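The three test/ln sequences above install each CA under the OpenSSL lookup convention: certificates are resolved by a `<subject-hash>.0` filename in /etc/ssl/certs (b5213941.0 for minikubeCA here). A sketch of one link step:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		// Compute the subject hash with openssl, then create the hash-named
		// symlink OpenSSL uses for CA lookup.
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
			panic(err)
		}
		fmt.Println("linked", link, "->", pemPath)
	}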
	I0919 22:24:25.186407   24965 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:24:25.195070   24965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:24:25.212907   24965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:24:25.230864   24965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:24:25.241990   24965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:24:25.253312   24965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:24:25.273889   24965 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
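Each `-checkend 86400` run above asks whether a control-plane certificate expires within the next 24 hours. The same check in pure Go:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Equivalent of `openssl x509 -noout -checkend 86400` for one of the
		// certs probed above.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", time.Until(cert.NotAfter) < 24*time.Hour)
	}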
	I0919 22:24:25.311865   24965 kubeadm.go:392] StartCluster: {Name:functional-351278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-351278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:24:25.311930   24965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 22:24:25.311986   24965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:24:25.486357   24965 cri.go:89] found id: "18c2d509c5728590a77ee037c7f0994bafee35e8bf16b51258dee0762a0a53b1"
	I0919 22:24:25.486371   24965 cri.go:89] found id: "eb8f63dfd7cc4378b1a91b1e35eef735c40ccd151cff64e84d64aa5de2121aae"
	I0919 22:24:25.486374   24965 cri.go:89] found id: "a41d060c833a02a6404078af8ea2569c9f77f65721fd004704b5b3e80d50bd61"
	I0919 22:24:25.486376   24965 cri.go:89] found id: "10c598550cc09fff82686c0933621cfc45b23d13c3546265f21a53cd4c27111f"
	I0919 22:24:25.486379   24965 cri.go:89] found id: "9732f6304c59d282334ca0d28d637dd10341a1c86fd79880df7b7519a377526e"
	I0919 22:24:25.486383   24965 cri.go:89] found id: "b7437c0892138980821b8380bea2cf06226e64b4577586864c3d9108bbb920d5"
	I0919 22:24:25.486385   24965 cri.go:89] found id: "a175fc435bf66d545f99e3dcfde247db73b6427fff8f4e6a247b9aa1501416ac"
	I0919 22:24:25.486390   24965 cri.go:89] found id: "eb67abf22ecf3a0f827043b2529a65a52d701db406a4f3154a7e0e9fe75d8479"
	I0919 22:24:25.486393   24965 cri.go:89] found id: "779c31d8053304d5937aeafe03eaeeb6161d6a5493f7f0c4eab240620acbd6c1"
	I0919 22:24:25.486400   24965 cri.go:89] found id: "49244150004a6a12285eb544ea0842bcb4a15fd450ca908ce4a86e4e727047d5"
	I0919 22:24:25.486403   24965 cri.go:89] found id: "83f9ab7b1448c72a4bfbf7955593ac65fa53b83d3abbd6880b6c772e9c836214"
	I0919 22:24:25.486407   24965 cri.go:89] found id: "0dfecd81883fb9e357cda6a70403fc7b88aa92a04f31124e5ae81c8c50aa376e"
	I0919 22:24:25.486410   24965 cri.go:89] found id: "0cde132b3c0ea06933d75c9ea670e2b37b3ea79fcc02db0e96f9c990d5ed0d41"
	I0919 22:24:25.486413   24965 cri.go:89] found id: "cb39521dcb3d439f8c84ae9e415e39ec3ace3f71a4f48aed6fb78fb506376b0d"
	I0919 22:24:25.486415   24965 cri.go:89] found id: ""
	I0919 22:24:25.486466   24965 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
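The final step in the log enumerates kube-system containers by ID. A sketch of how that "found id:" list is produced (same crictl flags and label filter as the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// `crictl ps -a --quiet` prints one container ID per line for the
		// given label filter.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}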
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-351278 -n functional-351278
helpers_test.go:269: (dbg) Run:  kubectl --context functional-351278 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-hxq2h hello-node-connect-7d85dfc575-47ht6 mysql-5bb876957f-2hghz sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-351278 describe pod hello-node-75c85bcc94-hxq2h hello-node-connect-7d85dfc575-47ht6 mysql-5bb876957f-2hghz sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-351278 describe pod hello-node-75c85bcc94-hxq2h hello-node-connect-7d85dfc575-47ht6 mysql-5bb876957f-2hghz sp-pod:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-hxq2h
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:25:05 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6jtdq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6jtdq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  6m3s                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hxq2h to functional-351278
	  Warning  Failed     93s (x2 over 4m6s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     93s (x2 over 4m6s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    80s (x2 over 4m6s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     80s (x2 over 4m6s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    66s (x3 over 6m3s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-47ht6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:25:01 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6kzd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s6kzd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m7s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-47ht6 to functional-351278
	  Warning  Failed     4m37s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m4s (x2 over 4m37s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m4s                  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    112s (x2 over 4m37s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     112s (x2 over 4m37s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    100s (x3 over 6m7s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-5bb876957f-2hghz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:24:58 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjr59 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vjr59:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m10s                 default-scheduler  Successfully assigned default/mysql-5bb876957f-2hghz to functional-351278
	  Warning  Failed     5m38s                 kubelet            Failed to pull image "docker.io/mysql:5.7": copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m11s (x3 over 6m9s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     30s (x3 over 5m38s)   kubelet            Error: ErrImagePull
	  Warning  Failed     30s (x2 over 2m35s)   kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    5s (x4 over 5m38s)    kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     5s (x4 over 5m38s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:25:06 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zqhg9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-zqhg9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-351278
	  Warning  Failed     63s (x2 over 3m36s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     63s (x2 over 3m36s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    48s (x2 over 3m35s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     48s (x2 over 3m35s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    34s (x3 over 6m2s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (369.05s)
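Note: TestFunctional/parallel/PersistentVolumeClaim exercises the storage provisioner, but the pod events above show it failed only because every pull of docker.io/nginx hit Docker Hub's unauthenticated pull rate limit (toomanyrequests). A minimal workaround sketch, assuming the CI host can still pull (or already caches) the image: side-load it into the profile so the kubelet never contacts the registry. The profile name functional-351278 is taken from the logs; the minikube image subcommands are standard CLI and are shown only as a possible mitigation, not as part of this run.

	# pull once on the host (authenticated, if docker login has been run), then copy
	# the image into the cluster's container storage
	docker pull docker.io/nginx
	out/minikube-linux-amd64 -p functional-351278 image load docker.io/nginx
	# confirm the image is now present so the pod can start without reaching Docker Hub
	out/minikube-linux-amd64 -p functional-351278 image ls | grep nginx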

                                                
                                    
TestFunctional/parallel/MySQL (602.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-351278 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-2hghz" [e83df643-ba88-420d-ade1-6c4a474cd6fd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-351278 -n functional-351278
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-09-19 22:34:58.656128403 +0000 UTC m=+1254.054102611
functional_test.go:1804: (dbg) Run:  kubectl --context functional-351278 describe po mysql-5bb876957f-2hghz -n default
functional_test.go:1804: (dbg) kubectl --context functional-351278 describe po mysql-5bb876957f-2hghz -n default:
Name:             mysql-5bb876957f-2hghz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-351278/192.168.39.95
Start Time:       Fri, 19 Sep 2025 22:24:58 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjr59 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-vjr59:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-2hghz to functional-351278
  Warning  Failed     9m28s                  kubelet            Failed to pull image "docker.io/mysql:5.7": copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     4m20s (x2 over 6m25s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     105s (x4 over 9m28s)   kubelet            Error: ErrImagePull
  Warning  Failed     105s                   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    29s (x11 over 9m28s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
  Warning  Failed     29s (x11 over 9m28s)   kubelet            Error: ImagePullBackOff
  Normal   Pulling    17s (x5 over 9m59s)    kubelet            Pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-351278 logs mysql-5bb876957f-2hghz -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-351278 logs mysql-5bb876957f-2hghz -n default: exit status 1 (71.239315ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-2hghz" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-351278 logs mysql-5bb876957f-2hghz -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
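An alternative mitigation, sketched below with placeholder credentials, is to authenticate the pulls instead of avoiding them: create an imagePullSecret and attach it to the default service account so the mysql deployment's pulls count against an authenticated Docker Hub quota. The secret name regcred and the DOCKER_USER/DOCKER_PASS variables are hypothetical, not values from this run.

	# register Docker Hub credentials in the test namespace
	kubectl --context functional-351278 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PASS"
	# pods using the default service account now pull with these credentials
	kubectl --context functional-351278 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'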
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-351278 -n functional-351278
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-351278 logs -n 25: (1.590134006s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                ARGS                                                                 │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start     │ -p functional-351278 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	│ mount     │ -p functional-351278 /tmp/TestFunctionalparallelMountCmdany-port2270664554/001:/mount-9p --alsologtostderr -v=1                     │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	│ ssh       │ functional-351278 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │                     │
	│ ssh       │ functional-351278 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ ssh       │ functional-351278 ssh -- ls -la /mount-9p                                                                                           │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ ssh       │ functional-351278 ssh cat /mount-9p/test-1758321069045638215                                                                        │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:31 UTC │ 19 Sep 25 22:31 UTC │
	│ ssh       │ functional-351278 ssh stat /mount-9p/created-by-test                                                                                │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh       │ functional-351278 ssh stat /mount-9p/created-by-pod                                                                                 │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh       │ functional-351278 ssh sudo umount -f /mount-9p                                                                                      │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh       │ functional-351278 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ mount     │ -p functional-351278 /tmp/TestFunctionalparallelMountCmdspecific-port3719278976/001:/mount-9p --alsologtostderr -v=1 --port 46464   │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh       │ functional-351278 ssh findmnt -T /mount-9p | grep 9p                                                                                │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh       │ functional-351278 ssh -- ls -la /mount-9p                                                                                           │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh       │ functional-351278 ssh sudo umount -f /mount-9p                                                                                      │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh       │ functional-351278 ssh findmnt -T /mount1                                                                                            │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ mount     │ -p functional-351278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2795973708/001:/mount1 --alsologtostderr -v=1                  │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ mount     │ -p functional-351278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2795973708/001:/mount3 --alsologtostderr -v=1                  │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ mount     │ -p functional-351278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2795973708/001:/mount2 --alsologtostderr -v=1                  │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ ssh       │ functional-351278 ssh findmnt -T /mount1                                                                                            │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh       │ functional-351278 ssh findmnt -T /mount2                                                                                            │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ ssh       │ functional-351278 ssh findmnt -T /mount3                                                                                            │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:32 UTC │
	│ mount     │ -p functional-351278 --kill=true                                                                                                    │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ start     │ -p functional-351278 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ start     │ -p functional-351278 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false           │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-351278 --alsologtostderr -v=1                                                                      │ functional-351278 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:32:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:32:50.448994   28990 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:32:50.449267   28990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:32:50.449277   28990 out.go:374] Setting ErrFile to fd 2...
	I0919 22:32:50.449281   28990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:32:50.449466   28990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	I0919 22:32:50.449918   28990 out.go:368] Setting JSON to false
	I0919 22:32:50.450889   28990 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4497,"bootTime":1758316673,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:32:50.450979   28990 start.go:140] virtualization: kvm guest
	I0919 22:32:50.452926   28990 out.go:179] * [functional-351278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:32:50.454152   28990 notify.go:220] Checking for updates...
	I0919 22:32:50.454178   28990 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:32:50.455249   28990 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:32:50.456277   28990 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 22:32:50.457463   28990 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 22:32:50.458570   28990 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:32:50.459746   28990 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:32:50.461354   28990 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:32:50.461950   28990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:32:50.462017   28990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:32:50.475786   28990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43741
	I0919 22:32:50.476249   28990 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:32:50.476777   28990 main.go:141] libmachine: Using API Version  1
	I0919 22:32:50.476805   28990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:32:50.477159   28990 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:32:50.477415   28990 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:32:50.477658   28990 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:32:50.478003   28990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:32:50.478045   28990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:32:50.492182   28990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36729
	I0919 22:32:50.492616   28990 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:32:50.493058   28990 main.go:141] libmachine: Using API Version  1
	I0919 22:32:50.493081   28990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:32:50.493467   28990 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:32:50.493665   28990 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:32:50.526542   28990 out.go:179] * Using the kvm2 driver based on existing profile
	I0919 22:32:50.527765   28990 start.go:304] selected driver: kvm2
	I0919 22:32:50.527780   28990 start.go:918] validating driver "kvm2" against &{Name:functional-351278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-351278 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:32:50.527887   28990 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:32:50.528840   28990 cni.go:84] Creating CNI manager for ""
	I0919 22:32:50.528908   28990 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 22:32:50.528959   28990 start.go:348] cluster config:
	{Name:functional-351278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-351278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:32:50.530330   28990 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.556770159Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758321299556746160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7cda4478-f791-4d10-b656-9b42c9447283 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.557996072Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2eff9cb6-33f7-4282-b425-992dc675bd8d name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.558200769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2eff9cb6-33f7-4282-b425-992dc675bd8d name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.558485948Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a787edfb8f9c734d3f5475054443381d7a2a79092acbf9a206f87afe5b0a0c5,PodSandboxId:06fdd7820d034f4ae92e5d2d96796018617b28fd316023a16e12d5852a4a5a3f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758321162451285058,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc475075-c430-4387-b9b0-f728391024f1,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f830720c30812bcf9aa33ac20c632924b1c7120a65fd62b5dcfd937a7ae0e617,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758320675216064724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cb2cb146fe5847510292057e63e40a96fd64e4f4a76d8696090e6a48ed4351,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758320675254838006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6483900c50ab2f70850376c9f661c10fc004d33319d8bf76f53dbc28e3259c8b,PodSandboxId:d7316350ed4bf51f55c971d0eed2f6c875ccfdd174aca0edfa500a9128027f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758320670920170425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb349a09d58d71c20ea5d2b4f09b994c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protoc
ol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ceb7aad18b856b069a53fa36528acb133c0db8d9b22639ad02163b2a8d256,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ede7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758320670593619725,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300bb2063f5a43edc6035ff49f65d585ce99e18457f8ac56a6d29b175cf12e8e,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758320670610434339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654bd01629fdeca3657986cd26bfffe3f8a55afa36789f47b07bc6d11d84a780,PodSandboxId:eb5586f63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758320670574957041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d062b037dd6fb189144a98d2de6f1c0d776f2e6505f52278691cb46e10a23d74,PodSandboxId:16b9b7066729ddc70087ac98b62dbfca1d172e0e6e3fbdbcd4b64c1188c97b60,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Creat
edAt:1758320666084991194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48123667739fa5c6e4b0d2d5215d09e671990614ccd0d3d4fc54cf1ff733e76f,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ed
e7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758320665174114998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c2d509c5728590a77ee037c7f0994bafee3
5e8bf16b51258dee0762a0a53b1,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758320665022788748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8f63dfd7cc4378b1a91b1e35eef735c40ccd151cff64e84d
64aa5de2121aae,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758320664876226298,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a41d060c833a02a6404078af8ea2569c9f77f65721fd004704b5b3e80d50bd61,PodSandboxId:eb5586f
63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758320664849656479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:10c598550cc09fff82686c0933621cfc45b23d13c3546265f21a53cd4c27111f,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758320664820547401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9732f6304c59d282334ca0d28d637dd10341a1c86fd79880df7b7519a377526e,PodSandboxId:6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758320628073012264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2eff9cb6-33f7-4282-b425-992dc675bd8d name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.607471319Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b22fadc7-cf7c-49c9-a2a1-0cdeb76a6bcb name=/runtime.v1.RuntimeService/Version
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.607559360Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b22fadc7-cf7c-49c9-a2a1-0cdeb76a6bcb name=/runtime.v1.RuntimeService/Version
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.610517371Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c08dfc6d-7088-4549-8f6b-552fe3555dbf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.611755982Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758321299611725403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c08dfc6d-7088-4549-8f6b-552fe3555dbf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.612790835Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34b2426a-6196-4b45-a0cb-4c2d4246077b name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.612956729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34b2426a-6196-4b45-a0cb-4c2d4246077b name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.613306568Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a787edfb8f9c734d3f5475054443381d7a2a79092acbf9a206f87afe5b0a0c5,PodSandboxId:06fdd7820d034f4ae92e5d2d96796018617b28fd316023a16e12d5852a4a5a3f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758321162451285058,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc475075-c430-4387-b9b0-f728391024f1,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f830720c30812bcf9aa33ac20c632924b1c7120a65fd62b5dcfd937a7ae0e617,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758320675216064724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cb2cb146fe5847510292057e63e40a96fd64e4f4a76d8696090e6a48ed4351,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758320675254838006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6483900c50ab2f70850376c9f661c10fc004d33319d8bf76f53dbc28e3259c8b,PodSandboxId:d7316350ed4bf51f55c971d0eed2f6c875ccfdd174aca0edfa500a9128027f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758320670920170425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb349a09d58d71c20ea5d2b4f09b994c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protoc
ol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ceb7aad18b856b069a53fa36528acb133c0db8d9b22639ad02163b2a8d256,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ede7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758320670593619725,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300bb2063f5a43edc6035ff49f65d585ce99e18457f8ac56a6d29b175cf12e8e,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758320670610434339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654bd01629fdeca3657986cd26bfffe3f8a55afa36789f47b07bc6d11d84a780,PodSandboxId:eb5586f63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758320670574957041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d062b037dd6fb189144a98d2de6f1c0d776f2e6505f52278691cb46e10a23d74,PodSandboxId:16b9b7066729ddc70087ac98b62dbfca1d172e0e6e3fbdbcd4b64c1188c97b60,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Creat
edAt:1758320666084991194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48123667739fa5c6e4b0d2d5215d09e671990614ccd0d3d4fc54cf1ff733e76f,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ed
e7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758320665174114998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c2d509c5728590a77ee037c7f0994bafee3
5e8bf16b51258dee0762a0a53b1,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758320665022788748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8f63dfd7cc4378b1a91b1e35eef735c40ccd151cff64e84d
64aa5de2121aae,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758320664876226298,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a41d060c833a02a6404078af8ea2569c9f77f65721fd004704b5b3e80d50bd61,PodSandboxId:eb5586f
63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758320664849656479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:10c598550cc09fff82686c0933621cfc45b23d13c3546265f21a53cd4c27111f,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758320664820547401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9732f6304c59d282334ca0d28d637dd10341a1c86fd79880df7b7519a377526e,PodSandboxId:6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758320628073012264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=34b2426a-6196-4b45-a0cb-4c2d4246077b name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.650521206Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d0b0a72-3b50-4395-8d2b-cdef4ac5b5ec name=/runtime.v1.RuntimeService/Version
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.650594022Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d0b0a72-3b50-4395-8d2b-cdef4ac5b5ec name=/runtime.v1.RuntimeService/Version
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.653026981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79e7b78a-62db-49fa-adbb-5f5e0d8365ad name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.653987699Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758321299653955479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79e7b78a-62db-49fa-adbb-5f5e0d8365ad name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.654664610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b11d4ba-1ab1-4fdf-a243-65e907175600 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.654745341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b11d4ba-1ab1-4fdf-a243-65e907175600 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.655221879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a787edfb8f9c734d3f5475054443381d7a2a79092acbf9a206f87afe5b0a0c5,PodSandboxId:06fdd7820d034f4ae92e5d2d96796018617b28fd316023a16e12d5852a4a5a3f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758321162451285058,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc475075-c430-4387-b9b0-f728391024f1,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f830720c30812bcf9aa33ac20c632924b1c7120a65fd62b5dcfd937a7ae0e617,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758320675216064724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cb2cb146fe5847510292057e63e40a96fd64e4f4a76d8696090e6a48ed4351,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758320675254838006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6483900c50ab2f70850376c9f661c10fc004d33319d8bf76f53dbc28e3259c8b,PodSandboxId:d7316350ed4bf51f55c971d0eed2f6c875ccfdd174aca0edfa500a9128027f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758320670920170425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb349a09d58d71c20ea5d2b4f09b994c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protoc
ol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ceb7aad18b856b069a53fa36528acb133c0db8d9b22639ad02163b2a8d256,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ede7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758320670593619725,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300bb2063f5a43edc6035ff49f65d585ce99e18457f8ac56a6d29b175cf12e8e,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758320670610434339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654bd01629fdeca3657986cd26bfffe3f8a55afa36789f47b07bc6d11d84a780,PodSandboxId:eb5586f63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758320670574957041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d062b037dd6fb189144a98d2de6f1c0d776f2e6505f52278691cb46e10a23d74,PodSandboxId:16b9b7066729ddc70087ac98b62dbfca1d172e0e6e3fbdbcd4b64c1188c97b60,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Creat
edAt:1758320666084991194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48123667739fa5c6e4b0d2d5215d09e671990614ccd0d3d4fc54cf1ff733e76f,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ed
e7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758320665174114998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c2d509c5728590a77ee037c7f0994bafee3
5e8bf16b51258dee0762a0a53b1,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758320665022788748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8f63dfd7cc4378b1a91b1e35eef735c40ccd151cff64e84d
64aa5de2121aae,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758320664876226298,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a41d060c833a02a6404078af8ea2569c9f77f65721fd004704b5b3e80d50bd61,PodSandboxId:eb5586f
63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758320664849656479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:10c598550cc09fff82686c0933621cfc45b23d13c3546265f21a53cd4c27111f,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758320664820547401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9732f6304c59d282334ca0d28d637dd10341a1c86fd79880df7b7519a377526e,PodSandboxId:6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758320628073012264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b11d4ba-1ab1-4fdf-a243-65e907175600 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.707702343Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=209d8853-1dda-4d8d-a0ee-ea9644baf5e0 name=/runtime.v1.RuntimeService/Version
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.707776414Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=209d8853-1dda-4d8d-a0ee-ea9644baf5e0 name=/runtime.v1.RuntimeService/Version
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.711212681Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9e0a882-d8de-43d8-85ea-4672ba272ab1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.711759525Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758321299711735166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:175576,},InodesUsed:&UInt64Value{Value:87,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9e0a882-d8de-43d8-85ea-4672ba272ab1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.712460301Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65797fa3-d4c3-4bcf-992f-d5db2b323a30 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.712660155Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65797fa3-d4c3-4bcf-992f-d5db2b323a30 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 22:34:59 functional-351278 crio[4941]: time="2025-09-19 22:34:59.713825520Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a787edfb8f9c734d3f5475054443381d7a2a79092acbf9a206f87afe5b0a0c5,PodSandboxId:06fdd7820d034f4ae92e5d2d96796018617b28fd316023a16e12d5852a4a5a3f,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1758321162451285058,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dc475075-c430-4387-b9b0-f728391024f1,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f830720c30812bcf9aa33ac20c632924b1c7120a65fd62b5dcfd937a7ae0e617,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758320675216064724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72cb2cb146fe5847510292057e63e40a96fd64e4f4a76d8696090e6a48ed4351,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758320675254838006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6483900c50ab2f70850376c9f661c10fc004d33319d8bf76f53dbc28e3259c8b,PodSandboxId:d7316350ed4bf51f55c971d0eed2f6c875ccfdd174aca0edfa500a9128027f8f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758320670920170425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb349a09d58d71c20ea5d2b4f09b994c,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protoc
ol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca9ceb7aad18b856b069a53fa36528acb133c0db8d9b22639ad02163b2a8d256,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ede7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758320670593619725,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.port
s: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300bb2063f5a43edc6035ff49f65d585ce99e18457f8ac56a6d29b175cf12e8e,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_RUNNING,CreatedAt:1758320670610434339,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654bd01629fdeca3657986cd26bfffe3f8a55afa36789f47b07bc6d11d84a780,PodSandboxId:eb5586f63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758320670574957041,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kuber
netes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d062b037dd6fb189144a98d2de6f1c0d776f2e6505f52278691cb46e10a23d74,PodSandboxId:16b9b7066729ddc70087ac98b62dbfca1d172e0e6e3fbdbcd4b64c1188c97b60,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,Creat
edAt:1758320666084991194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48123667739fa5c6e4b0d2d5215d09e671990614ccd0d3d4fc54cf1ff733e76f,PodSandboxId:dfc27b667d8fb9502bb99e124254f314b5ed
e7a16da906e21922b48b9813d110,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1758320665174114998,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 820510e9b04676b04325640c25f4a8e8,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18c2d509c5728590a77ee037c7f0994bafee3
5e8bf16b51258dee0762a0a53b1,PodSandboxId:179e1cbc0b9cb7eb9b9f338b3f22ceeb08891407c365ac303f4dd9c6fed4143d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758320665022788748,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd642869-ae38-45a0-ad01-05dcfff73601,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb8f63dfd7cc4378b1a91b1e35eef735c40ccd151cff64e84d
64aa5de2121aae,PodSandboxId:26fc8596bfe94da02bdc7e7484fa0c2ca0695b9450116772839e3a85a024b3ab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_EXITED,CreatedAt:1758320664876226298,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gnc2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa84af28-0f0b-418e-9cd9-fa7851c365b3,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a41d060c833a02a6404078af8ea2569c9f77f65721fd004704b5b3e80d50bd61,PodSandboxId:eb5586f
63d27cbb5f06cca606652dc3863f5c2a62ae63f01b8466cdc4199c0f8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_EXITED,CreatedAt:1758320664849656479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67dcfe94bb7274dbb92a0f8ed062f93a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:10c598550cc09fff82686c0933621cfc45b23d13c3546265f21a53cd4c27111f,PodSandboxId:50d153de402bbaf6578165f458cd05f74f00bba57e724d93d9ac9e169199de44,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTAINER_EXITED,CreatedAt:1758320664820547401,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-351278,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12680b25168cedd6455f72c7610a1ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9732f6304c59d282334ca0d28d637dd10341a1c86fd79880df7b7519a377526e,PodSandboxId:6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1758320628073012264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-kcks9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d98d841-9889-4f5a-b46d-4c15a211c7e9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65797fa3-d4c3-4bcf-992f-d5db2b323a30 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5a787edfb8f9c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   2 minutes ago       Exited              mount-munger              0                   06fdd7820d034       busybox-mount
	72cb2cb146fe5       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      10 minutes ago      Running             kube-proxy                3                   26fc8596bfe94       kube-proxy-gnc2m
	f830720c30812       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       4                   179e1cbc0b9cb       storage-provisioner
	6483900c50ab2       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      10 minutes ago      Running             kube-apiserver            0                   d7316350ed4bf       kube-apiserver-functional-351278
	300bb2063f5a4       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      10 minutes ago      Running             kube-controller-manager   3                   50d153de402bb       kube-controller-manager-functional-351278
	ca9ceb7aad18b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      3                   dfc27b667d8fb       etcd-functional-351278
	654bd01629fde       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      10 minutes ago      Running             kube-scheduler            3                   eb5586f63d27c       kube-scheduler-functional-351278
	d062b037dd6fb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   2                   16b9b7066729d       coredns-66bc5c9577-kcks9
	48123667739fa       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Exited              etcd                      2                   dfc27b667d8fb       etcd-functional-351278
	18c2d509c5728       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       3                   179e1cbc0b9cb       storage-provisioner
	eb8f63dfd7cc4       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      10 minutes ago      Exited              kube-proxy                2                   26fc8596bfe94       kube-proxy-gnc2m
	a41d060c833a0       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      10 minutes ago      Exited              kube-scheduler            2                   eb5586f63d27c       kube-scheduler-functional-351278
	10c598550cc09       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      10 minutes ago      Exited              kube-controller-manager   2                   50d153de402bb       kube-controller-manager-functional-351278
	9732f6304c59d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   1                   6467ca6fcd998       coredns-66bc5c9577-kcks9
	
	
	==> coredns [9732f6304c59d282334ca0d28d637dd10341a1c86fd79880df7b7519a377526e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51843 - 42718 "HINFO IN 3262367968470923463.4546114080195651747. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.087807319s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d062b037dd6fb189144a98d2de6f1c0d776f2e6505f52278691cb46e10a23d74] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37152 - 37023 "HINFO IN 8047511030750463407.6215404466120594246. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.460804898s
	
	
	==> describe nodes <==
	Name:               functional-351278
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-351278
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=functional-351278
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_22_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:22:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-351278
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:34:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:33:04 +0000   Fri, 19 Sep 2025 22:22:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:33:04 +0000   Fri, 19 Sep 2025 22:22:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:33:04 +0000   Fri, 19 Sep 2025 22:22:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:33:04 +0000   Fri, 19 Sep 2025 22:22:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    functional-351278
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008584Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9fbec12c9cc4da5a4819f98a915eadd
	  System UUID:                e9fbec12-c9cc-4da5-a481-9f98a915eadd
	  Boot ID:                    0ec8fcd8-9c60-4abf-860c-295d8944fa7f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hxq2h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  default                     hello-node-connect-7d85dfc575-47ht6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	  default                     mysql-5bb876957f-2hghz                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-kcks9                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-351278                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-351278              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-351278     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-gnc2m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-351278              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-wzzm4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-htp94         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-351278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-351278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-351278 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node functional-351278 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node functional-351278 event: Registered Node functional-351278 in Controller
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-351278 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-351278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-351278 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-351278 event: Registered Node functional-351278 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-351278 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-351278 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-351278 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-351278 event: Registered Node functional-351278 in Controller
	
	
	==> dmesg <==
	[Sep19 22:22] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000048] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000519] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.191929] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089286] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.095437] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.128311] kauditd_printk_skb: 171 callbacks suppressed
	[  +1.053733] kauditd_printk_skb: 18 callbacks suppressed
	[Sep19 22:23] kauditd_printk_skb: 220 callbacks suppressed
	[  +0.109627] kauditd_printk_skb: 11 callbacks suppressed
	[  +4.558363] kauditd_printk_skb: 243 callbacks suppressed
	[Sep19 22:24] kauditd_printk_skb: 137 callbacks suppressed
	[  +0.111375] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.310472] kauditd_printk_skb: 78 callbacks suppressed
	[  +0.126850] kauditd_printk_skb: 272 callbacks suppressed
	[  +5.561032] kauditd_printk_skb: 102 callbacks suppressed
	[  +4.743320] kauditd_printk_skb: 62 callbacks suppressed
	[  +0.048897] kauditd_printk_skb: 90 callbacks suppressed
	[Sep19 22:25] kauditd_printk_skb: 65 callbacks suppressed
	[ +24.004450] kauditd_printk_skb: 74 callbacks suppressed
	[Sep19 22:32] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.450214] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [48123667739fa5c6e4b0d2d5215d09e671990614ccd0d3d4fc54cf1ff733e76f] <==
	{"level":"warn","ts":"2025-09-19T22:24:26.076069Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2025-09-19T22:24:26.076081Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.95:2380"]}
	{"level":"info","ts":"2025-09-19T22:24:26.076113Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-19T22:24:26.076644Z","caller":"embed/etcd.go:146","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.95:2379"]}
	{"level":"info","ts":"2025-09-19T22:24:26.076833Z","caller":"embed/etcd.go:323","msg":"starting an etcd server","etcd-version":"3.6.4","git-sha":"5400cdc","go-version":"go1.23.11","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"functional-351278","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.95:2380"],"listen-peer-urls":["https://192.168.39.95:2380"],"advertise-client-urls":["https://192.168.39.95:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.95:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"experimental-local-address":"","cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cl
uster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"feature-gates":"InitialCorruptCheck=true","initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","discovery-token":"","discovery-endpoints":"","discovery-dial-timeout":"2s","discovery-request-timeout":"5s","discovery-keepalive-time":"2s","discovery-keepalive-timeout":"6s","discovery-insecure-transport":true,"discovery-insecure-skip-tls-verify":false,"discovery-cert":"","discovery-key":"","discovery-cacert":"","discovery-user":"","downgrade-check-interval":"5s","max-learners":1,"v2-deprecation":"write-only"}
	{"level":"info","ts":"2025-09-19T22:24:26.077611Z","logger":"bbolt","caller":"backend/backend.go:203","msg":"Opening db file (/var/lib/minikube/etcd/member/snap/db) with mode -rw------- and with options: {Timeout: 0s, NoGrowSync: false, NoFreelistSync: true, PreLoadFreelist: false, FreelistType: hashmap, ReadOnly: false, MmapFlags: 8000, InitialMmapSize: 10737418240, PageSize: 0, NoSync: false, OpenFile: 0x0, Mlock: false, Logger: 0xc000121390}"}
	{"level":"info","ts":"2025-09-19T22:24:26.095385Z","logger":"bbolt","caller":"bbolt@v1.4.2/db.go:321","msg":"Opening bbolt db (/var/lib/minikube/etcd/member/snap/db) successfully"}
	{"level":"info","ts":"2025-09-19T22:24:26.095456Z","caller":"storage/backend.go:80","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"17.930871ms"}
	{"level":"info","ts":"2025-09-19T22:24:26.095504Z","caller":"etcdserver/bootstrap.go:220","msg":"restore consistentIndex","index":567}
	{"level":"info","ts":"2025-09-19T22:24:26.119624Z","caller":"etcdserver/bootstrap.go:441","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-09-19T22:24:26.122969Z","caller":"etcdserver/bootstrap.go:232","msg":"recovered v3 backend","backend-size-bytes":1179648,"backend-size":"1.2 MB","backend-size-in-use-bytes":1134592,"backend-size-in-use":"1.1 MB"}
	{"level":"info","ts":"2025-09-19T22:24:26.123965Z","caller":"etcdserver/bootstrap.go:90","msg":"Bootstrapping WAL from snapshot"}
	{"level":"info","ts":"2025-09-19T22:24:26.160224Z","caller":"etcdserver/bootstrap.go:599","msg":"restarting local member","cluster-id":"986e33f48d4d13ba","local-member-id":"a71e7bac075997","commit-index":567}
	{"level":"info","ts":"2025-09-19T22:24:26.162606Z","caller":"etcdserver/bootstrap.go:94","msg":"bootstrapping cluster"}
	{"level":"info","ts":"2025-09-19T22:24:26.164048Z","caller":"etcdserver/bootstrap.go:101","msg":"bootstrapping storage"}
	{"level":"info","ts":"2025-09-19T22:24:26.175855Z","caller":"membership/cluster.go:605","msg":"Detected member only in v3store but missing in v2store","member":"{ID:a71e7bac075997 RaftAttributes:{PeerURLs:[https://192.168.39.95:2380] IsLearner:false} Attributes:{Name:functional-351278 ClientURLs:[https://192.168.39.95:2379]}}"}
	{"level":"info","ts":"2025-09-19T22:24:26.178255Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-09-19T22:24:26.178279Z","caller":"membership/cluster.go:297","msg":"recovered/added member from store","cluster-id":"986e33f48d4d13ba","local-member-id":"a71e7bac075997","recovered-remote-peer-id":"a71e7bac075997","recovered-remote-peer-urls":["https://192.168.39.95:2380"],"recovered-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-19T22:24:26.178430Z","caller":"membership/cluster.go:307","msg":"set cluster version from store","cluster-version":"3.6"}
	{"level":"info","ts":"2025-09-19T22:24:26.178446Z","caller":"etcdserver/bootstrap.go:109","msg":"bootstrapping raft"}
	{"level":"info","ts":"2025-09-19T22:24:26.181063Z","caller":"etcdserver/server.go:312","msg":"bootstrap successfully"}
	{"level":"info","ts":"2025-09-19T22:24:26.182197Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"a71e7bac075997 switched to configuration voters=()"}
	{"level":"info","ts":"2025-09-19T22:24:26.182232Z","logger":"raft","caller":"v3@v3.6.0/raft.go:897","msg":"a71e7bac075997 became follower at term 3"}
	{"level":"info","ts":"2025-09-19T22:24:26.182240Z","logger":"raft","caller":"v3@v3.6.0/raft.go:493","msg":"newRaft a71e7bac075997 [peers: [], term: 3, commit: 567, applied: 0, lastindex: 567, lastterm: 3]"}
	{"level":"warn","ts":"2025-09-19T22:24:26.193088Z","caller":"auth/store.go:1135","msg":"simple token is not cryptographically signed"}
	
	
	==> etcd [ca9ceb7aad18b856b069a53fa36528acb133c0db8d9b22639ad02163b2a8d256] <==
	{"level":"warn","ts":"2025-09-19T22:24:32.996830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.005341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.017665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.035124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.045969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.059781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.082181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.102768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.122632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.133996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.148518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.159167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.173818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.188995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.203658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.215842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.227397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.262460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.272505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.305766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.321449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:24:33.361840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56662","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:34:32.200434Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":985}
	{"level":"info","ts":"2025-09-19T22:34:32.216240Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":985,"took":"15.353883ms","hash":2834440057,"current-db-size-bytes":3338240,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":3338240,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2025-09-19T22:34:32.216282Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2834440057,"revision":985,"compact-revision":-1}
	
	
	==> kernel <==
	 22:35:00 up 12 min,  0 users,  load average: 0.33, 0.41, 0.25
	Linux functional-351278 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Sep  9 02:24:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [6483900c50ab2f70850376c9f661c10fc004d33319d8bf76f53dbc28e3259c8b] <==
	I0919 22:24:39.692574       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:24:53.807975       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.225.20"}
	I0919 22:24:58.319626       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.163.45"}
	I0919 22:25:01.118840       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.109.72"}
	I0919 22:25:05.132702       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.132.69"}
	I0919 22:25:42.529501       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:25:57.346320       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:04.152951       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:06.674054       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:11.739177       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:34.539000       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:14.651185       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:40.362928       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:20.976132       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:47.885983       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:22.666809       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:47.895976       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:48.070414       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:51.478440       1 controller.go:667] quota admission added evaluator for: namespaces
	I0919 22:32:51.759720       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.83.102"}
	I0919 22:32:51.798748       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.48.82"}
	I0919 22:33:10.825978       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:07.268548       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:16.923720       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:34:34.169714       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [10c598550cc09fff82686c0933621cfc45b23d13c3546265f21a53cd4c27111f] <==
	I0919 22:24:26.606554       1 serving.go:386] Generated self-signed cert in-memory
	I0919 22:24:27.254668       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0919 22:24:27.254704       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:27.257310       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0919 22:24:27.257444       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 22:24:27.257720       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0919 22:24:27.257745       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [300bb2063f5a43edc6035ff49f65d585ce99e18457f8ac56a6d29b175cf12e8e] <==
	I0919 22:24:37.531489       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:37.531599       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0919 22:24:37.531619       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 22:24:37.538613       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:24:37.539827       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 22:24:37.544369       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 22:24:37.544487       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0919 22:24:37.544655       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 22:24:37.544722       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-351278"
	I0919 22:24:37.544777       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 22:24:37.545184       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:24:37.547281       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:24:37.550109       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0919 22:24:37.551543       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 22:24:37.559825       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 22:24:37.566126       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	E0919 22:32:51.577007       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.589036       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.610193       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.616342       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.628208       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.629041       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.639852       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.641267       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:32:51.647802       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
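
The kubernetes-dashboard "serviceaccount not found" errors above are a startup ordering race: the ReplicaSets were applied at 22:32:51, a moment before their ServiceAccount existed, and the replica-set controller retries until it does. A quick way to confirm the namespace settled afterwards, as a sketch:

	kubectl --context functional-351278 -n kubernetes-dashboard get serviceaccounts,replicasets,pods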
	
	
	==> kube-proxy [72cb2cb146fe5847510292057e63e40a96fd64e4f4a76d8696090e6a48ed4351] <==
	I0919 22:24:35.554555       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:24:35.656289       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:24:35.656600       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.95"]
	E0919 22:24:35.656975       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:24:35.712360       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0919 22:24:35.712449       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 22:24:35.712475       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:24:35.734784       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:24:35.735228       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:24:35.735258       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:35.737742       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:24:35.737851       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:24:35.743832       1 config.go:200] "Starting service config controller"
	I0919 22:24:35.744069       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:24:35.744808       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:24:35.744830       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:24:35.745343       1 config.go:309] "Starting node config controller"
	I0919 22:24:35.745354       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:24:35.745359       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:24:35.838538       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:24:35.844699       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:24:35.845223       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
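
The IPv6 failure above means the guest kernel exposes no ip6tables "nat" table, so kube-proxy drops to single-stack IPv4 mode rather than aborting. The probe it performs can be reproduced by hand, assuming shell access to the node:

	# Should print the same "Table does not exist" error kube-proxy logged.
	minikube -p functional-351278 ssh "sudo ip6tables -t nat -L -n"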
	
	
	==> kube-proxy [eb8f63dfd7cc4378b1a91b1e35eef735c40ccd151cff64e84d64aa5de2121aae] <==
	I0919 22:24:25.364860       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:24:25.532215       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0919 22:24:25.538045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-351278&limit=500&resourceVersion=0\": dial tcp 192.168.39.95:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-scheduler [654bd01629fdeca3657986cd26bfffe3f8a55afa36789f47b07bc6d11d84a780] <==
	I0919 22:24:33.397444       1 serving.go:386] Generated self-signed cert in-memory
	I0919 22:24:34.211621       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:24:34.211706       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:24:34.220243       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0919 22:24:34.220330       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0919 22:24:34.220402       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:34.220431       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:34.220457       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 22:24:34.220473       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 22:24:34.220485       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:24:34.220560       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:24:34.321480       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 22:24:34.321537       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0919 22:24:34.321583       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [a41d060c833a02a6404078af8ea2569c9f77f65721fd004704b5b3e80d50bd61] <==
	I0919 22:24:26.731449       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:24:27.211844       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.95:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.95:8441: connect: connection refused
	W0919 22:24:27.212028       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:24:27.212120       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:24:27.235673       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:24:27.235715       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0919 22:24:27.235731       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0919 22:24:27.237834       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:24:27.238080       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:24:27.238148       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:27.238179       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:24:27.238622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.39.95:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.95:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:24:27.238839       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E0919 22:24:27.239533       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:27.239623       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0919 22:24:27.239664       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0919 22:24:27.239699       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:24:27.239717       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0919 22:24:27.239793       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0919 22:24:27.239824       1 run.go:72] "command failed" err="finished without leader elect"
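
This scheduler instance (a41d060c...) came up at 22:24:26 while the apiserver on 192.168.39.95:8441 was still refusing connections, so it exited with "finished without leader elect"; the other instance (654bd016..., above) started at 22:24:33 and synced its caches cleanly. To see which instance currently holds leadership, assuming the default kube-scheduler leader-election Lease in kube-system:

	kubectl --context functional-351278 -n kube-system get lease kube-scheduler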
	
	
	==> kubelet <==
	Sep 19 22:34:07 functional-351278 kubelet[6067]: E0919 22:34:07.953859    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-47ht6" podUID="3af4e468-7894-42f7-8bf3-23cebdaddd0c"
	Sep 19 22:34:10 functional-351278 kubelet[6067]: E0919 22:34:10.231736    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321250231043474  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:10 functional-351278 kubelet[6067]: E0919 22:34:10.231756    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321250231043474  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:17 functional-351278 kubelet[6067]: E0919 22:34:17.955834    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2hghz" podUID="e83df643-ba88-420d-ade1-6c4a474cd6fd"
	Sep 19 22:34:18 functional-351278 kubelet[6067]: E0919 22:34:18.952788    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-47ht6" podUID="3af4e468-7894-42f7-8bf3-23cebdaddd0c"
	Sep 19 22:34:20 functional-351278 kubelet[6067]: E0919 22:34:20.233512    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321260233210156  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:20 functional-351278 kubelet[6067]: E0919 22:34:20.233553    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321260233210156  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:29 functional-351278 kubelet[6067]: E0919 22:34:29.957708    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-47ht6" podUID="3af4e468-7894-42f7-8bf3-23cebdaddd0c"
	Sep 19 22:34:29 functional-351278 kubelet[6067]: E0919 22:34:29.960296    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-2hghz" podUID="e83df643-ba88-420d-ade1-6c4a474cd6fd"
	Sep 19 22:34:30 functional-351278 kubelet[6067]: E0919 22:34:30.063808    6067 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod1d98d841-9889-4f5a-b46d-4c15a211c7e9/crio-6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5: Error finding container 6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5: Status 404 returned error can't find the container with id 6467ca6fcd998e43c87b7a92b68a257970d17335a3ee9c72c87257f3a6f6c0c5
	Sep 19 22:34:30 functional-351278 kubelet[6067]: E0919 22:34:30.235516    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321270234655293  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:30 functional-351278 kubelet[6067]: E0919 22:34:30.235541    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321270234655293  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:40 functional-351278 kubelet[6067]: E0919 22:34:40.237340    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321280236808475  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:40 functional-351278 kubelet[6067]: E0919 22:34:40.237365    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321280236808475  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:41 functional-351278 kubelet[6067]: E0919 22:34:41.953566    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-47ht6" podUID="3af4e468-7894-42f7-8bf3-23cebdaddd0c"
	Sep 19 22:34:44 functional-351278 kubelet[6067]: E0919 22:34:44.858380    6067 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 19 22:34:44 functional-351278 kubelet[6067]: E0919 22:34:44.858464    6067 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Sep 19 22:34:44 functional-351278 kubelet[6067]: E0919 22:34:44.858751    6067 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-hxq2h_default(10da76bb-ac93-4179-85ff-f400a8350ff2): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 19 22:34:44 functional-351278 kubelet[6067]: E0919 22:34:44.858789    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-hxq2h" podUID="10da76bb-ac93-4179-85ff-f400a8350ff2"
	Sep 19 22:34:50 functional-351278 kubelet[6067]: E0919 22:34:50.238758    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321290238335088  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:50 functional-351278 kubelet[6067]: E0919 22:34:50.238803    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321290238335088  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:34:52 functional-351278 kubelet[6067]: E0919 22:34:52.953557    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-47ht6" podUID="3af4e468-7894-42f7-8bf3-23cebdaddd0c"
	Sep 19 22:34:55 functional-351278 kubelet[6067]: E0919 22:34:55.955507    6067 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-hxq2h" podUID="10da76bb-ac93-4179-85ff-f400a8350ff2"
	Sep 19 22:35:00 functional-351278 kubelet[6067]: E0919 22:35:00.240618    6067 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321300240221582  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
	Sep 19 22:35:00 functional-351278 kubelet[6067]: E0919 22:35:00.240662    6067 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321300240221582  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:175576}  inodes_used:{value:87}}"
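
Every pull failure in this kubelet log has the same root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests) on kicbase/echo-server and docker.io/mysql:5.7. The interleaved eviction_manager "missing image stats" errors appear to be a separate, recurring stats problem (the CRI ImageFsInfo response lacks the capacity fields the eviction manager expects) and are unrelated to the pulls. One common mitigation sketch, assuming the images are already present in the local Docker daemon, is to side-load them so kubelet never contacts docker.io:

	minikube -p functional-351278 image load kicbase/echo-server:latest
	minikube -p functional-351278 image load docker.io/mysql:5.7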
	
	
	==> storage-provisioner [18c2d509c5728590a77ee037c7f0994bafee35e8bf16b51258dee0762a0a53b1] <==
	I0919 22:24:25.563398       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 22:24:25.566735       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [f830720c30812bcf9aa33ac20c632924b1c7120a65fd62b5dcfd937a7ae0e617] <==
	W0919 22:34:36.240166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:38.244076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:38.249442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:40.253422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:40.262801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:42.266612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:42.275747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:44.280040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:44.286852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:46.291018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:46.296225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:48.299843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:48.309830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:50.313669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:50.318804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:52.322832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:52.329843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:54.334250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:54.339754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:56.343482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:56.349052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:58.352648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:34:58.361999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:00.370317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:35:00.381859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
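
The steady pair of deprecation warnings every ~2 seconds is consistent with this storage-provisioner still renewing its leader-election record on a v1 Endpoints object; the API server warns but the requests succeed, so this is noise rather than a failure. To compare the legacy object with its Lease-based replacement (names assumed to follow the defaults in kube-system):

	kubectl --context functional-351278 -n kube-system get endpoints,leases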
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-351278 -n functional-351278
helpers_test.go:269: (dbg) Run:  kubectl --context functional-351278 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-hxq2h hello-node-connect-7d85dfc575-47ht6 mysql-5bb876957f-2hghz sp-pod dashboard-metrics-scraper-77bf4d6c4c-wzzm4 kubernetes-dashboard-855c9754f9-htp94
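
Note that status.phase!=Running also matches busybox-mount, which is in phase Succeeded rather than stuck; a narrower selector that keeps only pods that never reached Running or completion would be, for example:

	kubectl --context functional-351278 get po -A --field-selector=status.phase!=Running,status.phase!=Succeeded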
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-351278 describe pod busybox-mount hello-node-75c85bcc94-hxq2h hello-node-connect-7d85dfc575-47ht6 mysql-5bb876957f-2hghz sp-pod dashboard-metrics-scraper-77bf4d6c4c-wzzm4 kubernetes-dashboard-855c9754f9-htp94
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-351278 describe pod busybox-mount hello-node-75c85bcc94-hxq2h hello-node-connect-7d85dfc575-47ht6 mysql-5bb876957f-2hghz sp-pod dashboard-metrics-scraper-77bf4d6c4c-wzzm4 kubernetes-dashboard-855c9754f9-htp94: exit status 1 (109.423121ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:31:10 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://5a787edfb8f9c734d3f5475054443381d7a2a79092acbf9a206f87afe5b0a0c5
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 19 Sep 2025 22:32:42 +0000
	      Finished:     Fri, 19 Sep 2025 22:32:42 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n59bf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-n59bf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m50s  default-scheduler  Successfully assigned default/busybox-mount to functional-351278
	  Normal  Pulling    3m51s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m19s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.33s (1m31.478s including waiting). Image size: 4631262 bytes.
	  Normal  Created    2m19s  kubelet            Created container: mount-munger
	  Normal  Started    2m19s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-hxq2h
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:25:05 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6jtdq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6jtdq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m56s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hxq2h to functional-351278
	  Warning  Failed     3m22s (x3 over 7m59s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m31s (x4 over 9m56s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     17s (x4 over 7m59s)    kubelet            Error: ErrImagePull
	  Warning  Failed     17s                    kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    6s (x6 over 7m59s)     kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     6s (x6 over 7m59s)     kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-47ht6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:25:01 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s6kzd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s6kzd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-47ht6 to functional-351278
	  Warning  Failed     8m30s                kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m3s (x4 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     78s (x4 over 8m30s)  kubelet            Error: ErrImagePull
	  Warning  Failed     78s (x3 over 5m57s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    9s (x11 over 8m30s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     9s (x11 over 8m30s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-5bb876957f-2hghz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:24:58 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vjr59 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vjr59:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-2hghz to functional-351278
	  Warning  Failed     9m31s                  kubelet            Failed to pull image "docker.io/mysql:5.7": copying system image from manifest list: determining manifest MIME type for docker://mysql:5.7: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m23s (x2 over 6m28s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     108s (x4 over 9m31s)   kubelet            Error: ErrImagePull
	  Warning  Failed     108s                   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    32s (x11 over 9m31s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     32s (x11 over 9m31s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    20s (x5 over 10m)      kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-351278/192.168.39.95
	Start Time:       Fri, 19 Sep 2025 22:25:06 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zqhg9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-zqhg9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m54s                  default-scheduler  Successfully assigned default/sp-pod to functional-351278
	  Warning  Failed     4m56s (x2 over 7m29s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m20s (x3 over 7m29s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m20s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    115s (x4 over 7m28s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     115s (x4 over 7m28s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    100s (x4 over 9m55s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-wzzm4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-htp94" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-351278 describe pod busybox-mount hello-node-75c85bcc94-hxq2h hello-node-connect-7d85dfc575-47ht6 mysql-5bb876957f-2hghz sp-pod dashboard-metrics-scraper-77bf4d6c4c-wzzm4 kubernetes-dashboard-855c9754f9-htp94: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.94s)
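
Every pod in the describe output above is wedged in ImagePullBackOff by the same root cause: Docker Hub's unauthenticated pull rate limit ("toomanyrequests"). A minimal mitigation sketch follows: pre-load the images from the host with `minikube image load`, so the in-cluster kubelet never pulls from docker.io. The binary path, profile, and image names are taken from this report; the helper itself is illustrative, not minikube's test code.

// preload.go: push images into the cluster's container runtime ahead of the test.
package main

import (
	"fmt"
	"os/exec"
)

// preloadImage resolves the image once on the host (or from a local tarball)
// and copies it into the cluster's runtime via `minikube image load`.
func preloadImage(profile, image string) error {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "load", image)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("image load failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	for _, img := range []string{"docker.io/mysql:5.7", "docker.io/nginx", "kicbase/echo-server"} {
		if err := preloadImage("functional-351278", img); err != nil {
			fmt.Println(err)
		}
	}
}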

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-351278 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-351278 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-hxq2h" [10da76bb-ac93-4179-85ff-f400a8350ff2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-351278 -n functional-351278
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-19 22:35:05.428858458 +0000 UTC m=+1260.826832655
functional_test.go:1460: (dbg) Run:  kubectl --context functional-351278 describe po hello-node-75c85bcc94-hxq2h -n default
functional_test.go:1460: (dbg) kubectl --context functional-351278 describe po hello-node-75c85bcc94-hxq2h -n default:
Name:             hello-node-75c85bcc94-hxq2h
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-351278/192.168.39.95
Start Time:       Fri, 19 Sep 2025 22:25:05 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6jtdq (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-6jtdq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hxq2h to functional-351278
  Warning  Failed     3m26s (x3 over 8m3s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    2m35s (x4 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     21s (x4 over 8m3s)    kubelet            Error: ErrImagePull
  Warning  Failed     21s                   kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    10s (x6 over 8m3s)    kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     10s (x6 over 8m3s)    kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-351278 logs hello-node-75c85bcc94-hxq2h -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-351278 logs hello-node-75c85bcc94-hxq2h -n default: exit status 1 (64.788566ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-hxq2h" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-351278 logs hello-node-75c85bcc94-hxq2h -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.56s)
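
The 10m0s wait that expires here is a poll over pods matching a label selector. Below is a minimal sketch of the same readiness loop written against client-go, assuming a standard kubeconfig on the host; the namespace, selector, and timeout mirror the log above, but this is not the actual helpers_test.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 10m until every pod matching the selector is Running.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{LabelSelector: "app=hello-node"})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient API errors and empty lists: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
	// On the run above this returns "context deadline exceeded": the pod never
	// leaves Pending while its image pull is rate-limited.
	fmt.Println("wait result:", err)
}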

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351278 service --namespace=default --https --url hello-node: exit status 115 (294.177468ms)

                                                
                                                
-- stdout --
	https://192.168.39.95:30436
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-351278 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351278 service hello-node --url --format={{.IP}}: exit status 115 (293.984594ms)

                                                
                                                
-- stdout --
	192.168.39.95
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-351278 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351278 service hello-node --url: exit status 115 (298.812382ms)

                                                
                                                
-- stdout --
	http://192.168.39.95:30436
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-351278 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.95:30436
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.30s)
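
All three ServiceCmd subtests (HTTPS, Format, URL) fail identically: minikube prints the NodePort URL on stdout but exits with status 115, which its stderr ties to SVC_UNREACHABLE, i.e. the service has no running endpoints because the hello-node deployment above never pulled its image. A minimal sketch of how a caller can distinguish that exit code using only the standard library; the binary path and arguments are the ones from this report.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-351278",
		"service", "hello-node", "--url")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 115 {
		// SVC_UNREACHABLE: the service exists, but no pod backs it yet; the
		// URL on stdout was printed before the reachability check failed.
		fmt.Printf("service unreachable; stdout was: %s", out)
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("service URL: %s", out)
}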

                                                
                                    
TestPreload (154.05s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-227745 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
E0919 23:19:58.388614   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-227745 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m33.939030421s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-227745 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-227745 image pull gcr.io/k8s-minikube/busybox: (1.443246368s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-227745
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-227745: (7.001662276s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-227745 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-227745 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (48.613628231s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-227745 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-09-19 23:20:59.778235229 +0000 UTC m=+4015.176209435
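
The assertion that fails here reduces to a substring check over `image list` output: gcr.io/k8s-minikube/busybox was pulled before the stop/restart cycle but is absent afterwards, so the pulled image did not survive the restart. A minimal sketch of that check, assuming the binary path and profile from this report (illustrative, not the actual preload_test.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the images currently known to the cluster's container runtime.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-227745",
		"image", "list").Output()
	if err != nil {
		panic(err)
	}
	const want = "gcr.io/k8s-minikube/busybox"
	if !strings.Contains(string(out), want) {
		fmt.Printf("expected %q in image list, got:\n%s", want, out)
	}
}
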
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-227745 -n test-preload-227745
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-227745 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-227745 logs -n 25: (1.179094042s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-337202 ssh -n multinode-337202-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:07 UTC │ 19 Sep 25 23:07 UTC │
	│ ssh     │ multinode-337202 ssh -n multinode-337202 sudo cat /home/docker/cp-test_multinode-337202-m03_multinode-337202.txt                                                                    │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:07 UTC │ 19 Sep 25 23:07 UTC │
	│ cp      │ multinode-337202 cp multinode-337202-m03:/home/docker/cp-test.txt multinode-337202-m02:/home/docker/cp-test_multinode-337202-m03_multinode-337202-m02.txt                           │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:07 UTC │ 19 Sep 25 23:07 UTC │
	│ ssh     │ multinode-337202 ssh -n multinode-337202-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:07 UTC │ 19 Sep 25 23:07 UTC │
	│ ssh     │ multinode-337202 ssh -n multinode-337202-m02 sudo cat /home/docker/cp-test_multinode-337202-m03_multinode-337202-m02.txt                                                            │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:07 UTC │ 19 Sep 25 23:07 UTC │
	│ node    │ multinode-337202 node stop m03                                                                                                                                                      │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:07 UTC │ 19 Sep 25 23:07 UTC │
	│ node    │ multinode-337202 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:07 UTC │ 19 Sep 25 23:08 UTC │
	│ node    │ list -p multinode-337202                                                                                                                                                            │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:08 UTC │                     │
	│ stop    │ -p multinode-337202                                                                                                                                                                 │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:08 UTC │ 19 Sep 25 23:10 UTC │
	│ start   │ -p multinode-337202 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:10 UTC │ 19 Sep 25 23:13 UTC │
	│ node    │ list -p multinode-337202                                                                                                                                                            │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │                     │
	│ node    │ multinode-337202 node delete m03                                                                                                                                                    │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:13 UTC │
	│ stop    │ multinode-337202 stop                                                                                                                                                               │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:13 UTC │ 19 Sep 25 23:16 UTC │
	│ start   │ -p multinode-337202 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:16 UTC │ 19 Sep 25 23:17 UTC │
	│ node    │ list -p multinode-337202                                                                                                                                                            │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:17 UTC │                     │
	│ start   │ -p multinode-337202-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-337202-m02 │ jenkins │ v1.37.0 │ 19 Sep 25 23:17 UTC │                     │
	│ start   │ -p multinode-337202-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-337202-m03 │ jenkins │ v1.37.0 │ 19 Sep 25 23:17 UTC │ 19 Sep 25 23:18 UTC │
	│ node    │ add -p multinode-337202                                                                                                                                                             │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:18 UTC │                     │
	│ delete  │ -p multinode-337202-m03                                                                                                                                                             │ multinode-337202-m03 │ jenkins │ v1.37.0 │ 19 Sep 25 23:18 UTC │ 19 Sep 25 23:18 UTC │
	│ delete  │ -p multinode-337202                                                                                                                                                                 │ multinode-337202     │ jenkins │ v1.37.0 │ 19 Sep 25 23:18 UTC │ 19 Sep 25 23:18 UTC │
	│ start   │ -p test-preload-227745 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-227745  │ jenkins │ v1.37.0 │ 19 Sep 25 23:18 UTC │ 19 Sep 25 23:20 UTC │
	│ image   │ test-preload-227745 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-227745  │ jenkins │ v1.37.0 │ 19 Sep 25 23:20 UTC │ 19 Sep 25 23:20 UTC │
	│ stop    │ -p test-preload-227745                                                                                                                                                              │ test-preload-227745  │ jenkins │ v1.37.0 │ 19 Sep 25 23:20 UTC │ 19 Sep 25 23:20 UTC │
	│ start   │ -p test-preload-227745 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-227745  │ jenkins │ v1.37.0 │ 19 Sep 25 23:20 UTC │ 19 Sep 25 23:20 UTC │
	│ image   │ test-preload-227745 image list                                                                                                                                                      │ test-preload-227745  │ jenkins │ v1.37.0 │ 19 Sep 25 23:20 UTC │ 19 Sep 25 23:20 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:20:10
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:20:10.983460   53006 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:20:10.983769   53006 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:20:10.983780   53006 out.go:374] Setting ErrFile to fd 2...
	I0919 23:20:10.983801   53006 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:20:10.983995   53006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	I0919 23:20:10.984473   53006 out.go:368] Setting JSON to false
	I0919 23:20:10.985425   53006 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7338,"bootTime":1758316673,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:20:10.985518   53006 start.go:140] virtualization: kvm guest
	I0919 23:20:10.987414   53006 out.go:179] * [test-preload-227745] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:20:10.988671   53006 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:20:10.988712   53006 notify.go:220] Checking for updates...
	I0919 23:20:10.990679   53006 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:20:10.991870   53006 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 23:20:10.993079   53006 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 23:20:10.994163   53006 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:20:10.995231   53006 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:20:10.996644   53006 config.go:182] Loaded profile config "test-preload-227745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0919 23:20:10.997027   53006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:20:10.997098   53006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:20:11.010253   53006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0919 23:20:11.010753   53006 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:20:11.011274   53006 main.go:141] libmachine: Using API Version  1
	I0919 23:20:11.011297   53006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:20:11.011809   53006 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:20:11.012025   53006 main.go:141] libmachine: (test-preload-227745) Calling .DriverName
	I0919 23:20:11.014430   53006 out.go:179] * Kubernetes 1.34.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.0
	I0919 23:20:11.015616   53006 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:20:11.016057   53006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:20:11.016107   53006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:20:11.029424   53006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
	I0919 23:20:11.030011   53006 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:20:11.030542   53006 main.go:141] libmachine: Using API Version  1
	I0919 23:20:11.030562   53006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:20:11.030952   53006 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:20:11.031150   53006 main.go:141] libmachine: (test-preload-227745) Calling .DriverName
	I0919 23:20:11.066115   53006 out.go:179] * Using the kvm2 driver based on existing profile
	I0919 23:20:11.068239   53006 start.go:304] selected driver: kvm2
	I0919 23:20:11.068258   53006 start.go:918] validating driver "kvm2" against &{Name:test-preload-227745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-227745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:20:11.068336   53006 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:20:11.069030   53006 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:20:11.069117   53006 install.go:123] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21594-14764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 23:20:11.083570   53006 install.go:134] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is {Version:v1.37.0 Commit:6e37ee63f758843bb5fe33c3a528c564c4b83d53}
	I0919 23:20:11.083600   53006 install.go:123] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21594-14764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 23:20:11.098047   53006 install.go:134] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is {Version:v1.37.0 Commit:6e37ee63f758843bb5fe33c3a528c564c4b83d53}
	I0919 23:20:11.098392   53006 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:20:11.098418   53006 cni.go:84] Creating CNI manager for ""
	I0919 23:20:11.098488   53006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 23:20:11.098555   53006 start.go:348] cluster config:
	{Name:test-preload-227745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-227745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:20:11.098665   53006 iso.go:125] acquiring lock: {Name:mk21ede999fca7478b081d3e470ef3cc88b140f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:20:11.100436   53006 out.go:179] * Starting "test-preload-227745" primary control-plane node in "test-preload-227745" cluster
	I0919 23:20:11.101627   53006 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0919 23:20:11.128613   53006 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0919 23:20:11.128663   53006 cache.go:58] Caching tarball of preloaded images
	I0919 23:20:11.128852   53006 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0919 23:20:11.130461   53006 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I0919 23:20:11.131582   53006 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0919 23:20:11.172082   53006 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I0919 23:20:13.766653   53006 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0919 23:20:13.766786   53006 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 ...
	I0919 23:20:14.506341   53006 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0919 23:20:14.506463   53006 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/config.json ...
	I0919 23:20:14.506721   53006 start.go:360] acquireMachinesLock for test-preload-227745: {Name:mke6cd936cf5da66e4fbcd4dcd8a2d3d3cae6c7b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 23:20:14.506806   53006 start.go:364] duration metric: took 44.768µs to acquireMachinesLock for "test-preload-227745"
	I0919 23:20:14.506822   53006 start.go:96] Skipping create...Using existing machine configuration
	I0919 23:20:14.506831   53006 fix.go:54] fixHost starting: 
	I0919 23:20:14.507099   53006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:20:14.507133   53006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:20:14.520502   53006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40755
	I0919 23:20:14.521116   53006 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:20:14.521622   53006 main.go:141] libmachine: Using API Version  1
	I0919 23:20:14.521649   53006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:20:14.521987   53006 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:20:14.522165   53006 main.go:141] libmachine: (test-preload-227745) Calling .DriverName
	I0919 23:20:14.522317   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetState
	I0919 23:20:14.524214   53006 fix.go:112] recreateIfNeeded on test-preload-227745: state=Stopped err=<nil>
	I0919 23:20:14.524246   53006 main.go:141] libmachine: (test-preload-227745) Calling .DriverName
	W0919 23:20:14.524406   53006 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 23:20:14.526335   53006 out.go:252] * Restarting existing kvm2 VM for "test-preload-227745" ...
	I0919 23:20:14.526376   53006 main.go:141] libmachine: (test-preload-227745) Calling .Start
	I0919 23:20:14.526553   53006 main.go:141] libmachine: (test-preload-227745) starting domain...
	I0919 23:20:14.526571   53006 main.go:141] libmachine: (test-preload-227745) ensuring networks are active...
	I0919 23:20:14.527368   53006 main.go:141] libmachine: (test-preload-227745) Ensuring network default is active
	I0919 23:20:14.527737   53006 main.go:141] libmachine: (test-preload-227745) Ensuring network mk-test-preload-227745 is active
	I0919 23:20:14.528126   53006 main.go:141] libmachine: (test-preload-227745) getting domain XML...
	I0919 23:20:14.529109   53006 main.go:141] libmachine: (test-preload-227745) DBG | starting domain XML:
	I0919 23:20:14.529137   53006 main.go:141] libmachine: (test-preload-227745) DBG | <domain type='kvm'>
	I0919 23:20:14.529149   53006 main.go:141] libmachine: (test-preload-227745) DBG |   <name>test-preload-227745</name>
	I0919 23:20:14.529162   53006 main.go:141] libmachine: (test-preload-227745) DBG |   <uuid>71161acb-9fd6-4a19-b8a9-4d69541ffe74</uuid>
	I0919 23:20:14.529173   53006 main.go:141] libmachine: (test-preload-227745) DBG |   <memory unit='KiB'>3145728</memory>
	I0919 23:20:14.529182   53006 main.go:141] libmachine: (test-preload-227745) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0919 23:20:14.529200   53006 main.go:141] libmachine: (test-preload-227745) DBG |   <vcpu placement='static'>2</vcpu>
	I0919 23:20:14.529224   53006 main.go:141] libmachine: (test-preload-227745) DBG |   <os>
	I0919 23:20:14.529232   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0919 23:20:14.529237   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <boot dev='cdrom'/>
	I0919 23:20:14.529243   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <boot dev='hd'/>
	I0919 23:20:14.529248   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <bootmenu enable='no'/>
	I0919 23:20:14.529253   53006 main.go:141] libmachine: (test-preload-227745) DBG |   </os>
	I0919 23:20:14.529260   53006 main.go:141] libmachine: (test-preload-227745) DBG |   <features>
	I0919 23:20:14.529267   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <acpi/>
	I0919 23:20:14.529274   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <apic/>
	I0919 23:20:14.529302   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <pae/>
	I0919 23:20:14.529328   53006 main.go:141] libmachine: (test-preload-227745) DBG |   </features>
	I0919 23:20:14.529346   53006 main.go:141] libmachine: (test-preload-227745) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0919 23:20:14.529360   53006 main.go:141] libmachine: (test-preload-227745) DBG |   <clock offset='utc'/>
	I0919 23:20:14.529373   53006 main.go:141] libmachine: (test-preload-227745) DBG |   <on_poweroff>destroy</on_poweroff>
	I0919 23:20:14.529384   53006 main.go:141] libmachine: (test-preload-227745) DBG |   <on_reboot>restart</on_reboot>
	I0919 23:20:14.529396   53006 main.go:141] libmachine: (test-preload-227745) DBG |   <on_crash>destroy</on_crash>
	I0919 23:20:14.529406   53006 main.go:141] libmachine: (test-preload-227745) DBG |   <devices>
	I0919 23:20:14.529418   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0919 23:20:14.529433   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <disk type='file' device='cdrom'>
	I0919 23:20:14.529443   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <driver name='qemu' type='raw'/>
	I0919 23:20:14.529458   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <source file='/home/jenkins/minikube-integration/21594-14764/.minikube/machines/test-preload-227745/boot2docker.iso'/>
	I0919 23:20:14.529494   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <target dev='hdc' bus='scsi'/>
	I0919 23:20:14.529551   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <readonly/>
	I0919 23:20:14.529571   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0919 23:20:14.529582   53006 main.go:141] libmachine: (test-preload-227745) DBG |     </disk>
	I0919 23:20:14.529593   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <disk type='file' device='disk'>
	I0919 23:20:14.529606   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0919 23:20:14.529623   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <source file='/home/jenkins/minikube-integration/21594-14764/.minikube/machines/test-preload-227745/test-preload-227745.rawdisk'/>
	I0919 23:20:14.529635   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <target dev='hda' bus='virtio'/>
	I0919 23:20:14.529652   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0919 23:20:14.529662   53006 main.go:141] libmachine: (test-preload-227745) DBG |     </disk>
	I0919 23:20:14.529674   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0919 23:20:14.529694   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0919 23:20:14.529707   53006 main.go:141] libmachine: (test-preload-227745) DBG |     </controller>
	I0919 23:20:14.529717   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0919 23:20:14.529740   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0919 23:20:14.529756   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0919 23:20:14.529777   53006 main.go:141] libmachine: (test-preload-227745) DBG |     </controller>
	I0919 23:20:14.529800   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <interface type='network'>
	I0919 23:20:14.529835   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <mac address='52:54:00:41:4e:37'/>
	I0919 23:20:14.529855   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <source network='mk-test-preload-227745'/>
	I0919 23:20:14.529868   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <model type='virtio'/>
	I0919 23:20:14.529883   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0919 23:20:14.529889   53006 main.go:141] libmachine: (test-preload-227745) DBG |     </interface>
	I0919 23:20:14.529899   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <interface type='network'>
	I0919 23:20:14.529909   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <mac address='52:54:00:2b:f3:ac'/>
	I0919 23:20:14.529923   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <source network='default'/>
	I0919 23:20:14.529935   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <model type='virtio'/>
	I0919 23:20:14.529952   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0919 23:20:14.529964   53006 main.go:141] libmachine: (test-preload-227745) DBG |     </interface>
	I0919 23:20:14.529973   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <serial type='pty'>
	I0919 23:20:14.529979   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <target type='isa-serial' port='0'>
	I0919 23:20:14.529990   53006 main.go:141] libmachine: (test-preload-227745) DBG |         <model name='isa-serial'/>
	I0919 23:20:14.530004   53006 main.go:141] libmachine: (test-preload-227745) DBG |       </target>
	I0919 23:20:14.530027   53006 main.go:141] libmachine: (test-preload-227745) DBG |     </serial>
	I0919 23:20:14.530043   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <console type='pty'>
	I0919 23:20:14.530054   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <target type='serial' port='0'/>
	I0919 23:20:14.530061   53006 main.go:141] libmachine: (test-preload-227745) DBG |     </console>
	I0919 23:20:14.530074   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <input type='mouse' bus='ps2'/>
	I0919 23:20:14.530084   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <input type='keyboard' bus='ps2'/>
	I0919 23:20:14.530116   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <audio id='1' type='none'/>
	I0919 23:20:14.530127   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <memballoon model='virtio'>
	I0919 23:20:14.530211   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0919 23:20:14.530243   53006 main.go:141] libmachine: (test-preload-227745) DBG |     </memballoon>
	I0919 23:20:14.530257   53006 main.go:141] libmachine: (test-preload-227745) DBG |     <rng model='virtio'>
	I0919 23:20:14.530273   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <backend model='random'>/dev/random</backend>
	I0919 23:20:14.530299   53006 main.go:141] libmachine: (test-preload-227745) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0919 23:20:14.530320   53006 main.go:141] libmachine: (test-preload-227745) DBG |     </rng>
	I0919 23:20:14.530333   53006 main.go:141] libmachine: (test-preload-227745) DBG |   </devices>
	I0919 23:20:14.530344   53006 main.go:141] libmachine: (test-preload-227745) DBG | </domain>
	I0919 23:20:14.530355   53006 main.go:141] libmachine: (test-preload-227745) DBG | 
	I0919 23:20:15.900752   53006 main.go:141] libmachine: (test-preload-227745) waiting for domain to start...
	I0919 23:20:15.901976   53006 main.go:141] libmachine: (test-preload-227745) domain is now running
	I0919 23:20:15.902010   53006 main.go:141] libmachine: (test-preload-227745) waiting for IP...
	I0919 23:20:15.902887   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:15.903474   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has current primary IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:15.903498   53006 main.go:141] libmachine: (test-preload-227745) found domain IP: 192.168.39.242
	I0919 23:20:15.903510   53006 main.go:141] libmachine: (test-preload-227745) reserving static IP address...
	I0919 23:20:15.903964   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "test-preload-227745", mac: "52:54:00:41:4e:37", ip: "192.168.39.242"} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:18:45 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:15.904000   53006 main.go:141] libmachine: (test-preload-227745) DBG | skip adding static IP to network mk-test-preload-227745 - found existing host DHCP lease matching {name: "test-preload-227745", mac: "52:54:00:41:4e:37", ip: "192.168.39.242"}
	I0919 23:20:15.904018   53006 main.go:141] libmachine: (test-preload-227745) reserved static IP address 192.168.39.242 for domain test-preload-227745
	I0919 23:20:15.904036   53006 main.go:141] libmachine: (test-preload-227745) waiting for SSH...
	I0919 23:20:15.904050   53006 main.go:141] libmachine: (test-preload-227745) DBG | Getting to WaitForSSH function...
	I0919 23:20:15.906269   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:15.906638   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:18:45 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:15.906668   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:15.906825   53006 main.go:141] libmachine: (test-preload-227745) DBG | Using SSH client type: external
	I0919 23:20:15.906850   53006 main.go:141] libmachine: (test-preload-227745) DBG | Using SSH private key: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/test-preload-227745/id_rsa (-rw-------)
	I0919 23:20:15.906910   53006 main.go:141] libmachine: (test-preload-227745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21594-14764/.minikube/machines/test-preload-227745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 23:20:15.906929   53006 main.go:141] libmachine: (test-preload-227745) DBG | About to run SSH command:
	I0919 23:20:15.906941   53006 main.go:141] libmachine: (test-preload-227745) DBG | exit 0
	I0919 23:20:27.194251   53006 main.go:141] libmachine: (test-preload-227745) DBG | SSH cmd err, output: exit status 255: 
	I0919 23:20:27.194284   53006 main.go:141] libmachine: (test-preload-227745) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0919 23:20:27.194301   53006 main.go:141] libmachine: (test-preload-227745) DBG | command : exit 0
	I0919 23:20:27.194319   53006 main.go:141] libmachine: (test-preload-227745) DBG | err     : exit status 255
	I0919 23:20:27.194335   53006 main.go:141] libmachine: (test-preload-227745) DBG | output  : 
	I0919 23:20:30.194810   53006 main.go:141] libmachine: (test-preload-227745) DBG | Getting to WaitForSSH function...
	I0919 23:20:30.197881   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:30.198330   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:30.198363   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:30.198478   53006 main.go:141] libmachine: (test-preload-227745) DBG | Using SSH client type: external
	I0919 23:20:30.198504   53006 main.go:141] libmachine: (test-preload-227745) DBG | Using SSH private key: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/test-preload-227745/id_rsa (-rw-------)
	I0919 23:20:30.198531   53006 main.go:141] libmachine: (test-preload-227745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21594-14764/.minikube/machines/test-preload-227745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 23:20:30.198547   53006 main.go:141] libmachine: (test-preload-227745) DBG | About to run SSH command:
	I0919 23:20:30.198572   53006 main.go:141] libmachine: (test-preload-227745) DBG | exit 0
	I0919 23:20:30.332295   53006 main.go:141] libmachine: (test-preload-227745) DBG | SSH cmd err, output: <nil>: 
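
The wait above succeeds only once a trivial exit 0 runs cleanly over SSH; the earlier exit status 255 was simply sshd not yet accepting connections. A minimal Go sketch of this readiness probe (waitForSSH is a hypothetical helper, not minikube's actual implementation), shelling out to the same external ssh client with the same kind of options:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH polls the guest by running "exit 0" until the external ssh
    // client returns success, mirroring the retry-on-exit-255 loop above.
    func waitForSSH(keyPath, addr string, attempts int) error {
        for i := 0; i < attempts; i++ {
            cmd := exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "ConnectTimeout=10",
                "-i", keyPath, addr, "exit 0")
            if err := cmd.Run(); err == nil {
                return nil // guest sshd is up and accepting our key
            }
            time.Sleep(3 * time.Second) // back off before the next probe
        }
        return fmt.Errorf("ssh to %s not ready after %d attempts", addr, attempts)
    }

    func main() {
        if err := waitForSSH("/path/to/id_rsa", "docker@192.168.39.242", 10); err != nil {
            fmt.Println(err)
        }
    }
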
	I0919 23:20:30.332839   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetConfigRaw
	I0919 23:20:30.333449   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetIP
	I0919 23:20:30.336077   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:30.336451   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:30.336481   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:30.336807   53006 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/config.json ...
	I0919 23:20:30.337043   53006 machine.go:93] provisionDockerMachine start ...
	I0919 23:20:30.337063   53006 main.go:141] libmachine: (test-preload-227745) Calling .DriverName
	I0919 23:20:30.337283   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHHostname
	I0919 23:20:30.339610   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:30.340015   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:30.340043   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:30.340193   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHPort
	I0919 23:20:30.340378   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHKeyPath
	I0919 23:20:30.340537   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHKeyPath
	I0919 23:20:30.340709   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHUsername
	I0919 23:20:30.340891   53006 main.go:141] libmachine: Using SSH client type: native
	I0919 23:20:30.341154   53006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0919 23:20:30.341166   53006 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:20:30.451841   53006 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0919 23:20:30.451877   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetMachineName
	I0919 23:20:30.452132   53006 buildroot.go:166] provisioning hostname "test-preload-227745"
	I0919 23:20:30.452156   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetMachineName
	I0919 23:20:30.452334   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHHostname
	I0919 23:20:30.455330   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:30.455699   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:30.455719   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:30.455879   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHPort
	I0919 23:20:30.456060   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHKeyPath
	I0919 23:20:30.456245   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHKeyPath
	I0919 23:20:30.456391   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHUsername
	I0919 23:20:30.456552   53006 main.go:141] libmachine: Using SSH client type: native
	I0919 23:20:30.456779   53006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0919 23:20:30.456792   53006 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-227745 && echo "test-preload-227745" | sudo tee /etc/hostname
	I0919 23:20:30.585701   53006 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-227745
	
	I0919 23:20:30.585761   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHHostname
	I0919 23:20:30.588855   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:30.589247   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:30.589278   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:30.589504   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHPort
	I0919 23:20:30.589759   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHKeyPath
	I0919 23:20:30.589965   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHKeyPath
	I0919 23:20:30.590123   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHUsername
	I0919 23:20:30.590291   53006 main.go:141] libmachine: Using SSH client type: native
	I0919 23:20:30.590502   53006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0919 23:20:30.590525   53006 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-227745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-227745/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-227745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:20:30.710839   53006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
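
The script above pins the machine's own name to 127.0.1.1, rewriting an existing entry if one is present and appending one otherwise. The same edit expressed in Go (setLoopbackHostname is a hypothetical helper; it is pointed at a copy of the file here rather than /etc/hosts itself):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setLoopbackHostname mirrors the shell above: rewrite an existing
    // 127.0.1.1 line if present, otherwise append one.
    func setLoopbackHostname(path, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        entry := "127.0.1.1 " + name
        var out []byte
        if re.Match(data) {
            out = re.ReplaceAll(data, []byte(entry))
        } else {
            out = append(data, []byte("\n"+entry+"\n")...)
        }
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        if err := setLoopbackHostname("/tmp/hosts.copy", "test-preload-227745"); err != nil {
            fmt.Println(err)
        }
    }
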
	I0919 23:20:30.710874   53006 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14764/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14764/.minikube}
	I0919 23:20:30.710896   53006 buildroot.go:174] setting up certificates
	I0919 23:20:30.710906   53006 provision.go:84] configureAuth start
	I0919 23:20:30.710914   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetMachineName
	I0919 23:20:30.711228   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetIP
	I0919 23:20:30.714142   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:30.714643   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:30.714675   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:30.714913   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHHostname
	I0919 23:20:30.717393   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:30.717756   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:30.717793   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:30.718022   53006 provision.go:143] copyHostCerts
	I0919 23:20:30.718090   53006 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14764/.minikube/ca.pem, removing ...
	I0919 23:20:30.718101   53006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14764/.minikube/ca.pem
	I0919 23:20:30.718168   53006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14764/.minikube/ca.pem (1082 bytes)
	I0919 23:20:30.718306   53006 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14764/.minikube/cert.pem, removing ...
	I0919 23:20:30.718316   53006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14764/.minikube/cert.pem
	I0919 23:20:30.718344   53006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14764/.minikube/cert.pem (1123 bytes)
	I0919 23:20:30.718401   53006 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14764/.minikube/key.pem, removing ...
	I0919 23:20:30.718408   53006 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14764/.minikube/key.pem
	I0919 23:20:30.718432   53006 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14764/.minikube/key.pem (1679 bytes)
	I0919 23:20:30.718490   53006 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca-key.pem org=jenkins.test-preload-227745 san=[127.0.0.1 192.168.39.242 localhost minikube test-preload-227745]
	I0919 23:20:30.861068   53006 provision.go:177] copyRemoteCerts
	I0919 23:20:30.861126   53006 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:20:30.861148   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHHostname
	I0919 23:20:30.863949   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:30.864284   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:30.864307   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:30.864505   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHPort
	I0919 23:20:30.864715   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHKeyPath
	I0919 23:20:30.864917   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHUsername
	I0919 23:20:30.865076   53006 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/test-preload-227745/id_rsa Username:docker}
	I0919 23:20:30.952660   53006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 23:20:30.985804   53006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0919 23:20:31.018742   53006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:20:31.051007   53006 provision.go:87] duration metric: took 340.086827ms to configureAuth
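
configureAuth regenerates the Docker-machine-style server certificate with the SANs listed above (the loopback address, the guest IP, and the host names). A self-signed sketch of the SAN handling using crypto/x509; the real flow signs with ca.pem/ca-key.pem rather than self-signing, but the template fields are the same:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed stand-in for the CA-signed server.pem above.
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-227745"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the san=[...] list in the log above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.242")},
            DNSNames:    []string{"localhost", "minikube", "test-preload-227745"},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
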
	I0919 23:20:31.051042   53006 buildroot.go:189] setting minikube options for container-runtime
	I0919 23:20:31.051230   53006 config.go:182] Loaded profile config "test-preload-227745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0919 23:20:31.051333   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHHostname
	I0919 23:20:31.054314   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:31.054709   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:31.054755   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:31.054954   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHPort
	I0919 23:20:31.055154   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHKeyPath
	I0919 23:20:31.055296   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHKeyPath
	I0919 23:20:31.055421   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHUsername
	I0919 23:20:31.055554   53006 main.go:141] libmachine: Using SSH client type: native
	I0919 23:20:31.055792   53006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0919 23:20:31.055816   53006 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 23:20:31.304342   53006 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 23:20:31.304375   53006 machine.go:96] duration metric: took 967.313829ms to provisionDockerMachine
	I0919 23:20:31.304387   53006 start.go:293] postStartSetup for "test-preload-227745" (driver="kvm2")
	I0919 23:20:31.304401   53006 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:20:31.304421   53006 main.go:141] libmachine: (test-preload-227745) Calling .DriverName
	I0919 23:20:31.304793   53006 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:20:31.304830   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHHostname
	I0919 23:20:31.307635   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:31.307995   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:31.308022   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:31.308183   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHPort
	I0919 23:20:31.308448   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHKeyPath
	I0919 23:20:31.308624   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHUsername
	I0919 23:20:31.308773   53006 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/test-preload-227745/id_rsa Username:docker}
	I0919 23:20:31.395415   53006 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:20:31.400693   53006 info.go:137] Remote host: Buildroot 2025.02
	I0919 23:20:31.400754   53006 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14764/.minikube/addons for local assets ...
	I0919 23:20:31.400865   53006 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14764/.minikube/files for local assets ...
	I0919 23:20:31.400947   53006 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/ssl/certs/186712.pem -> 186712.pem in /etc/ssl/certs
	I0919 23:20:31.401038   53006 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:20:31.413942   53006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/ssl/certs/186712.pem --> /etc/ssl/certs/186712.pem (1708 bytes)
	I0919 23:20:31.446900   53006 start.go:296] duration metric: took 142.499814ms for postStartSetup
	I0919 23:20:31.446951   53006 fix.go:56] duration metric: took 16.940121292s for fixHost
	I0919 23:20:31.446977   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHHostname
	I0919 23:20:31.449987   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:31.450286   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:31.450322   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:31.450530   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHPort
	I0919 23:20:31.450749   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHKeyPath
	I0919 23:20:31.450907   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHKeyPath
	I0919 23:20:31.451057   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHUsername
	I0919 23:20:31.451220   53006 main.go:141] libmachine: Using SSH client type: native
	I0919 23:20:31.451418   53006 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0919 23:20:31.451430   53006 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 23:20:31.561417   53006 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758324031.523960180
	
	I0919 23:20:31.561439   53006 fix.go:216] guest clock: 1758324031.523960180
	I0919 23:20:31.561446   53006 fix.go:229] Guest: 2025-09-19 23:20:31.52396018 +0000 UTC Remote: 2025-09-19 23:20:31.446957836 +0000 UTC m=+20.500719712 (delta=77.002344ms)
	I0919 23:20:31.561467   53006 fix.go:200] guest clock delta is within tolerance: 77.002344ms
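
The guest clock check parses the `date +%s.%N` output into a timestamp and compares it against the host clock within a tolerance. A sketch of that parsing and comparison (the 1s tolerance here is illustrative; the log only reports the delta):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts "date +%s.%N" output (e.g. 1758324031.523960180)
    // into a time.Time. Assumes the fractional part is the full 9-digit %N.
    func parseGuestClock(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec := int64(0)
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := parseGuestClock("1758324031.523960180")
        delta := time.Since(guest)
        if math.Abs(delta.Seconds()) < 1.0 {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        }
    }
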
	I0919 23:20:31.561472   53006 start.go:83] releasing machines lock for "test-preload-227745", held for 17.054656508s
	I0919 23:20:31.561501   53006 main.go:141] libmachine: (test-preload-227745) Calling .DriverName
	I0919 23:20:31.561803   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetIP
	I0919 23:20:31.564830   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:31.565192   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:31.565223   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:31.565402   53006 main.go:141] libmachine: (test-preload-227745) Calling .DriverName
	I0919 23:20:31.566059   53006 main.go:141] libmachine: (test-preload-227745) Calling .DriverName
	I0919 23:20:31.566243   53006 main.go:141] libmachine: (test-preload-227745) Calling .DriverName
	I0919 23:20:31.566357   53006 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:20:31.566408   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHHostname
	I0919 23:20:31.566440   53006 ssh_runner.go:195] Run: cat /version.json
	I0919 23:20:31.566465   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHHostname
	I0919 23:20:31.569624   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:31.569874   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:31.570099   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:31.570120   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:31.570325   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:31.570341   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHPort
	I0919 23:20:31.570350   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:31.570529   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHPort
	I0919 23:20:31.570552   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHKeyPath
	I0919 23:20:31.570782   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHKeyPath
	I0919 23:20:31.570787   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHUsername
	I0919 23:20:31.570998   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHUsername
	I0919 23:20:31.571016   53006 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/test-preload-227745/id_rsa Username:docker}
	I0919 23:20:31.571111   53006 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/test-preload-227745/id_rsa Username:docker}
	I0919 23:20:31.678739   53006 ssh_runner.go:195] Run: systemctl --version
	I0919 23:20:31.686024   53006 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 23:20:31.831809   53006 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 23:20:31.839415   53006 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 23:20:31.839496   53006 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:20:31.861857   53006 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
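
Bridge and podman CNI configs are disabled by renaming them with a .mk_disabled suffix rather than deleting them, so the step is reversible. A sketch of the same walk-and-rename in Go (disableCNIConfs is a hypothetical helper standing in for the find/mv pipeline above):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableCNIConfs renames bridge/podman configs to *.mk_disabled so the
    // container runtime ignores them.
    func disableCNIConfs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableCNIConfs("/etc/cni/net.d")
        fmt.Println(disabled, err)
    }
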
	I0919 23:20:31.861887   53006 start.go:495] detecting cgroup driver to use...
	I0919 23:20:31.861946   53006 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 23:20:31.883244   53006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:20:31.901685   53006 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:20:31.901760   53006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:20:31.920333   53006 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:20:31.938713   53006 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:20:32.092681   53006 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:20:32.320563   53006 docker.go:234] disabling docker service ...
	I0919 23:20:32.320655   53006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:20:32.337872   53006 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:20:32.354665   53006 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:20:32.523413   53006 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:20:32.673447   53006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:20:32.690816   53006 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:20:32.715566   53006 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 23:20:32.715637   53006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:20:32.729352   53006 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 23:20:32.729418   53006 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:20:32.743091   53006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:20:32.756975   53006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:20:32.773822   53006 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:20:32.792379   53006 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:20:32.806121   53006 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:20:32.830228   53006 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
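
The sed pipeline above edits the CRI-O drop-in in place: swap the pause image, force the cgroupfs cgroup manager, and open unprivileged ports. A sketch of the same single-key rewrite in Go (setConfKey is a hypothetical helper mirroring the s|^.*key = .*$|...| substitution):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setConfKey replaces an existing `key = ...` line in a CRI-O drop-in,
    // the same edit the sed commands above perform in place.
    func setConfKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        err := setConfKey("/etc/crio/crio.conf.d/02-crio.conf",
            "pause_image", "registry.k8s.io/pause:3.10")
        fmt.Println(err)
    }
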
	I0919 23:20:32.844124   53006 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:20:32.855921   53006 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 23:20:32.855995   53006 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 23:20:32.878774   53006 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:20:32.892039   53006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:20:33.044701   53006 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 23:20:33.171218   53006 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 23:20:33.171296   53006 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 23:20:33.177068   53006 start.go:563] Will wait 60s for crictl version
	I0919 23:20:33.177146   53006 ssh_runner.go:195] Run: which crictl
	I0919 23:20:33.182042   53006 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:20:33.227920   53006 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 23:20:33.228002   53006 ssh_runner.go:195] Run: crio --version
	I0919 23:20:33.260784   53006 ssh_runner.go:195] Run: crio --version
	I0919 23:20:33.298792   53006 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I0919 23:20:33.300122   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetIP
	I0919 23:20:33.303025   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:33.303395   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:33.303440   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:33.303685   53006 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0919 23:20:33.308632   53006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:20:33.325481   53006 kubeadm.go:875] updating cluster {Name:test-preload-227745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-227745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:20:33.325630   53006 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0919 23:20:33.325679   53006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:20:33.371129   53006 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I0919 23:20:33.371225   53006 ssh_runner.go:195] Run: which lz4
	I0919 23:20:33.375810   53006 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 23:20:33.381000   53006 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 23:20:33.381033   53006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I0919 23:20:35.036402   53006 crio.go:462] duration metric: took 1.660625153s to copy over tarball
	I0919 23:20:35.036479   53006 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 23:20:36.791395   53006 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.754885882s)
	I0919 23:20:36.791426   53006 crio.go:469] duration metric: took 1.754991502s to extract the tarball
	I0919 23:20:36.791437   53006 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 23:20:36.833908   53006 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:20:36.880800   53006 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 23:20:36.880823   53006 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:20:36.880829   53006 kubeadm.go:926] updating node { 192.168.39.242 8443 v1.32.0 crio true true} ...
	I0919 23:20:36.880931   53006 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-227745 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-227745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
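
The kubelet unit drop-in above is rendered from the node's settings (runtime, version, hostname, IP). A sketch of rendering such a drop-in with text/template; the template and field names here are illustrative, not minikube's own, and the flag set is trimmed for brevity:

    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Wants={{.Runtime}}.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        // Values taken from the log above.
        t.Execute(os.Stdout, map[string]string{
            "Runtime": "crio",
            "Version": "v1.32.0",
            "Node":    "test-preload-227745",
            "IP":      "192.168.39.242",
        })
    }
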
	I0919 23:20:36.881000   53006 ssh_runner.go:195] Run: crio config
	I0919 23:20:36.930361   53006 cni.go:84] Creating CNI manager for ""
	I0919 23:20:36.930384   53006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 23:20:36.930395   53006 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:20:36.930422   53006 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.242 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-227745 NodeName:test-preload-227745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:20:36.930522   53006 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.242
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-227745"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.242"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.242"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
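
A generated document like the KubeletConfiguration above can be sanity-checked by round-tripping it through a YAML parser before it is shipped to the node; a small sketch assuming the gopkg.in/yaml.v3 package, with the document trimmed to a few fields:

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    const kubeletCfg = `
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    failSwapOn: false
    `

    func main() {
        var cfg map[string]interface{}
        if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
            panic(err) // malformed YAML would fail here, before deployment
        }
        fmt.Println("cgroupDriver:", cfg["cgroupDriver"]) // cgroupfs
    }
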
	
	I0919 23:20:36.930599   53006 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I0919 23:20:36.943347   53006 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:20:36.943431   53006 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:20:36.956076   53006 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0919 23:20:36.978962   53006 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:20:37.001496   53006 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I0919 23:20:37.024524   53006 ssh_runner.go:195] Run: grep 192.168.39.242	control-plane.minikube.internal$ /etc/hosts
	I0919 23:20:37.029139   53006 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:20:37.044998   53006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:20:37.199176   53006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:20:37.221023   53006 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745 for IP: 192.168.39.242
	I0919 23:20:37.221051   53006 certs.go:194] generating shared ca certs ...
	I0919 23:20:37.221072   53006 certs.go:226] acquiring lock for ca certs: {Name:mk1fe71ea89348ba0bd576e99c774a344fba186e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:20:37.221249   53006 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14764/.minikube/ca.key
	I0919 23:20:37.221300   53006 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.key
	I0919 23:20:37.221314   53006 certs.go:256] generating profile certs ...
	I0919 23:20:37.221425   53006 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/client.key
	I0919 23:20:37.221508   53006 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/apiserver.key.399a9c1f
	I0919 23:20:37.221565   53006 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/proxy-client.key
	I0919 23:20:37.221711   53006 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/18671.pem (1338 bytes)
	W0919 23:20:37.221783   53006 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14764/.minikube/certs/18671_empty.pem, impossibly tiny 0 bytes
	I0919 23:20:37.221800   53006 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 23:20:37.221848   53006 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:20:37.221880   53006 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:20:37.221909   53006 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/key.pem (1679 bytes)
	I0919 23:20:37.221980   53006 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/ssl/certs/186712.pem (1708 bytes)
	I0919 23:20:37.222745   53006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:20:37.270759   53006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 23:20:37.311427   53006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:20:37.344661   53006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:20:37.376945   53006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0919 23:20:37.409712   53006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:20:37.442498   53006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:20:37.474658   53006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 23:20:37.509524   53006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/ssl/certs/186712.pem --> /usr/share/ca-certificates/186712.pem (1708 bytes)
	I0919 23:20:37.541848   53006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:20:37.574479   53006 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/certs/18671.pem --> /usr/share/ca-certificates/18671.pem (1338 bytes)
	I0919 23:20:37.607370   53006 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:20:37.630839   53006 ssh_runner.go:195] Run: openssl version
	I0919 23:20:37.638231   53006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:20:37.652937   53006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:20:37.659649   53006 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:20:37.659705   53006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:20:37.667608   53006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:20:37.682011   53006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18671.pem && ln -fs /usr/share/ca-certificates/18671.pem /etc/ssl/certs/18671.pem"
	I0919 23:20:37.696620   53006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18671.pem
	I0919 23:20:37.702568   53006 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:22 /usr/share/ca-certificates/18671.pem
	I0919 23:20:37.702640   53006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18671.pem
	I0919 23:20:37.710675   53006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18671.pem /etc/ssl/certs/51391683.0"
	I0919 23:20:37.725422   53006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/186712.pem && ln -fs /usr/share/ca-certificates/186712.pem /etc/ssl/certs/186712.pem"
	I0919 23:20:37.739999   53006 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/186712.pem
	I0919 23:20:37.746155   53006 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:22 /usr/share/ca-certificates/186712.pem
	I0919 23:20:37.746222   53006 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/186712.pem
	I0919 23:20:37.754121   53006 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/186712.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:20:37.768455   53006 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:20:37.774925   53006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 23:20:37.783280   53006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 23:20:37.791593   53006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 23:20:37.800188   53006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 23:20:37.808401   53006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 23:20:37.816697   53006 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
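
Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The equivalent test in Go's standard library (expiresWithin is a hypothetical helper):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, the condition `openssl x509 -checkend 86400` tests above.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }
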
	I0919 23:20:37.824839   53006 kubeadm.go:392] StartCluster: {Name:test-preload-227745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-227745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:20:37.824914   53006 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 23:20:37.824960   53006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:20:37.869741   53006 cri.go:89] found id: ""
	I0919 23:20:37.869813   53006 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:20:37.883173   53006 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 23:20:37.883199   53006 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 23:20:37.883249   53006 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 23:20:37.896268   53006 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 23:20:37.896716   53006 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-227745" does not appear in /home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 23:20:37.896897   53006 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14764/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-227745" cluster setting kubeconfig missing "test-preload-227745" context setting]
	I0919 23:20:37.897165   53006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/kubeconfig: {Name:mk29db95201211dec339ee278b6433541126d194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
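The kubeconfig repair above notices that the profile's cluster and context entries are missing and rewrites the file under a write lock. A sketch of the same repair using client-go's clientcmd package, with the server URL and certificate paths taken from the surrounding log; the structure is an assumption, not minikube's actual code:

```go
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/21594-14764/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = clientcmdapi.NewConfig() // start fresh if the file is missing
	}
	name := "test-preload-227745"
	cfg.Clusters[name] = &clientcmdapi.Cluster{
		Server:               "https://192.168.39.242:8443",
		CertificateAuthority: "/home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt",
	}
	cfg.AuthInfos[name] = &clientcmdapi.AuthInfo{
		ClientCertificate: "/home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/client.crt",
		ClientKey:         "/home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/client.key",
	}
	cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	cfg.CurrentContext = name
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
```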
	I0919 23:20:37.897650   53006 kapi.go:59] client config for test-preload-227745: &rest.Config{Host:"https://192.168.39.242:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 23:20:37.898089   53006 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 23:20:37.898104   53006 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 23:20:37.898108   53006 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 23:20:37.898112   53006 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 23:20:37.898115   53006 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 23:20:37.898383   53006 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 23:20:37.910609   53006 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.39.242
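The reconfiguration decision above hinges on diff's exit status: exit 0 means the deployed kubeadm.yaml already matches the freshly generated one. A small Go sketch of that decision, relying on diff's documented convention (0 = identical, 1 = different, >1 = error):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// diff exits 0 when files match, 1 when they differ, >1 on trouble
	err := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("configs identical: no reconfiguration needed")
	case errors.As(err, &ee) && ee.ExitCode() == 1:
		fmt.Println("configs differ: control plane must be reconfigured")
	default:
		fmt.Println("diff failed:", err)
	}
}
```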
	I0919 23:20:37.910651   53006 kubeadm.go:1152] stopping kube-system containers ...
	I0919 23:20:37.910662   53006 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0919 23:20:37.910715   53006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:20:37.964587   53006 cri.go:89] found id: ""
	I0919 23:20:37.964677   53006 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0919 23:20:37.989713   53006 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:20:38.002612   53006 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:20:38.002634   53006 kubeadm.go:157] found existing configuration files:
	
	I0919 23:20:38.002685   53006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:20:38.014712   53006 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:20:38.014789   53006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:20:38.027342   53006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:20:38.040279   53006 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:20:38.040348   53006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:20:38.054040   53006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:20:38.066086   53006 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:20:38.066153   53006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:20:38.078871   53006 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:20:38.090891   53006 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:20:38.090942   53006 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
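The four grep-then-rm exchanges above implement stale-config cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 (including a missing file, as in this run) is removed so kubeadm will regenerate it. A compact Go sketch of the same policy:

```go
package main

import (
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		p := filepath.Join("/etc/kubernetes", f)
		data, err := os.ReadFile(p)
		// Missing files and files that don't reference the expected endpoint
		// are both treated as stale; removing them forces regeneration.
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(p)
		}
	}
}
```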
	I0919 23:20:38.103330   53006 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:20:38.117126   53006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 23:20:38.178896   53006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 23:20:39.407655   53006 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.228711783s)
	I0919 23:20:39.407706   53006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0919 23:20:39.672314   53006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 23:20:39.741329   53006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
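The five commands above run kubeadm's init phases individually (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full `kubeadm init`, each against the same --config file and with PATH pointed at the pinned v1.32.0 binaries. A hedged Go sketch of that sequencing, with the phase arguments copied from the logged commands:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, ph := range phases {
		args := append(ph, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// Prepend the pinned binaries directory, as the logged commands do.
		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.32.0:"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n%s", ph, err, out)
			os.Exit(1)
		}
	}
}
```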
	I0919 23:20:39.828702   53006 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:20:39.828808   53006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:20:40.329688   53006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:20:40.829274   53006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:20:41.329865   53006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:20:41.352883   53006 api_server.go:72] duration metric: took 1.524194115s to wait for apiserver process to appear ...
	I0919 23:20:41.352906   53006 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:20:41.352926   53006 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0919 23:20:44.365734   53006 api_server.go:279] https://192.168.39.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0919 23:20:44.365763   53006 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0919 23:20:44.365780   53006 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0919 23:20:44.385363   53006 api_server.go:279] https://192.168.39.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0919 23:20:44.385388   53006 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0919 23:20:44.853547   53006 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0919 23:20:44.859076   53006 api_server.go:279] https://192.168.39.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:20:44.859111   53006 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:20:45.353749   53006 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0919 23:20:45.362582   53006 api_server.go:279] https://192.168.39.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:20:45.362609   53006 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:20:45.853231   53006 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0919 23:20:45.858039   53006 api_server.go:279] https://192.168.39.242:8443/healthz returned 200:
	ok
	I0919 23:20:45.864718   53006 api_server.go:141] control plane version: v1.32.0
	I0919 23:20:45.864756   53006 api_server.go:131] duration metric: took 4.51184459s to wait for apiserver health ...
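The polling above shows the typical startup progression of /healthz: first 403 (anonymous requests are rejected until RBAC bootstrap finishes), then 500 while poststarthooks such as rbac/bootstrap-roles are still failing, and finally 200. A minimal Go poller in the same spirit; the retry interval and deadline are assumptions:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe only cares about readiness, so certificate verification
		// is skipped here; don't do this for real traffic.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.242:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403 (anonymous user blocked) and 500 (poststarthooks still
			// failing) are expected transients during startup.
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
	os.Exit(1)
}
```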
	I0919 23:20:45.864777   53006 cni.go:84] Creating CNI manager for ""
	I0919 23:20:45.864783   53006 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 23:20:45.866835   53006 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 23:20:45.868344   53006 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 23:20:45.883294   53006 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
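The 496-byte /etc/cni/net.d/1-k8s.conflist written above configures the bridge CNI recommended for the kvm2 + crio combination. Its exact contents are not shown in the log, so the conflist below is only a representative bridge + portmap chain; the subnet, bridge name, and CNI version are assumptions:

```go
package main

import "os"

func main() {
	// Representative bridge CNI chain; field values are illustrative,
	// not the literal file minikube writes.
	const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```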
	I0919 23:20:45.908476   53006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:20:45.915471   53006 system_pods.go:59] 7 kube-system pods found
	I0919 23:20:45.915522   53006 system_pods.go:61] "coredns-668d6bf9bc-247xs" [5d19b207-7860-495b-8517-32d3ed0aeeba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:20:45.915531   53006 system_pods.go:61] "etcd-test-preload-227745" [2ef94cd9-7898-45fc-9ae0-e8c7aaa332ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:20:45.915546   53006 system_pods.go:61] "kube-apiserver-test-preload-227745" [ca6ef810-ba94-438b-a08b-4b3090fd2277] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:20:45.915579   53006 system_pods.go:61] "kube-controller-manager-test-preload-227745" [06d634c3-c8da-4825-9014-19b6b5ec15bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:20:45.915588   53006 system_pods.go:61] "kube-proxy-p9lk5" [259605dc-9f56-4b61-8f89-560bab0084ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:20:45.915593   53006 system_pods.go:61] "kube-scheduler-test-preload-227745" [c455729a-7726-4425-9bf6-ab923f148e51] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:20:45.915598   53006 system_pods.go:61] "storage-provisioner" [be6f118f-b798-4baf-b817-60b6262605fb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:20:45.915605   53006 system_pods.go:74] duration metric: took 7.108995ms to wait for pod list to return data ...
	I0919 23:20:45.915614   53006 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:20:45.920736   53006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 23:20:45.920766   53006 node_conditions.go:123] node cpu capacity is 2
	I0919 23:20:45.920777   53006 node_conditions.go:105] duration metric: took 5.158779ms to run NodePressure ...
	I0919 23:20:45.920803   53006 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0919 23:20:46.205889   53006 kubeadm.go:720] waiting for restarted kubelet to initialise ...
	I0919 23:20:46.210148   53006 kubeadm.go:735] kubelet initialised
	I0919 23:20:46.210171   53006 kubeadm.go:736] duration metric: took 4.259422ms waiting for restarted kubelet to initialise ...
	I0919 23:20:46.210190   53006 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:20:46.226612   53006 ops.go:34] apiserver oom_adj: -16
	I0919 23:20:46.226637   53006 kubeadm.go:593] duration metric: took 8.343432072s to restartPrimaryControlPlane
	I0919 23:20:46.226647   53006 kubeadm.go:394] duration metric: took 8.401816397s to StartCluster
	I0919 23:20:46.226674   53006 settings.go:142] acquiring lock: {Name:mk9e6bfe60e4d22990b0b362d40b65315947b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:20:46.226789   53006 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 23:20:46.227467   53006 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/kubeconfig: {Name:mk29db95201211dec339ee278b6433541126d194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:20:46.227764   53006 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 23:20:46.227842   53006 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:20:46.227934   53006 config.go:182] Loaded profile config "test-preload-227745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0919 23:20:46.227932   53006 addons.go:69] Setting storage-provisioner=true in profile "test-preload-227745"
	I0919 23:20:46.227966   53006 addons.go:69] Setting default-storageclass=true in profile "test-preload-227745"
	I0919 23:20:46.228006   53006 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-227745"
	I0919 23:20:46.228017   53006 addons.go:238] Setting addon storage-provisioner=true in "test-preload-227745"
	W0919 23:20:46.228029   53006 addons.go:247] addon storage-provisioner should already be in state true
	I0919 23:20:46.228066   53006 host.go:66] Checking if "test-preload-227745" exists ...
	I0919 23:20:46.228390   53006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:20:46.228395   53006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:20:46.228432   53006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:20:46.228439   53006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:20:46.231431   53006 out.go:179] * Verifying Kubernetes components...
	I0919 23:20:46.232835   53006 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:20:46.242138   53006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38017
	I0919 23:20:46.242659   53006 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:20:46.243281   53006 main.go:141] libmachine: Using API Version  1
	I0919 23:20:46.243316   53006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:20:46.243719   53006 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:20:46.244348   53006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:20:46.244400   53006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:20:46.245161   53006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36413
	I0919 23:20:46.245635   53006 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:20:46.246171   53006 main.go:141] libmachine: Using API Version  1
	I0919 23:20:46.246195   53006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:20:46.246524   53006 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:20:46.246700   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetState
	I0919 23:20:46.249284   53006 kapi.go:59] client config for test-preload-227745: &rest.Config{Host:"https://192.168.39.242:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 23:20:46.249794   53006 addons.go:238] Setting addon default-storageclass=true in "test-preload-227745"
	W0919 23:20:46.249819   53006 addons.go:247] addon default-storageclass should already be in state true
	I0919 23:20:46.249846   53006 host.go:66] Checking if "test-preload-227745" exists ...
	I0919 23:20:46.250215   53006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:20:46.250262   53006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:20:46.260637   53006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44415
	I0919 23:20:46.261099   53006 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:20:46.261674   53006 main.go:141] libmachine: Using API Version  1
	I0919 23:20:46.261700   53006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:20:46.262142   53006 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:20:46.262344   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetState
	I0919 23:20:46.264260   53006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44847
	I0919 23:20:46.264562   53006 main.go:141] libmachine: (test-preload-227745) Calling .DriverName
	I0919 23:20:46.264647   53006 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:20:46.265068   53006 main.go:141] libmachine: Using API Version  1
	I0919 23:20:46.265088   53006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:20:46.265437   53006 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:20:46.265909   53006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:20:46.265946   53006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:20:46.266959   53006 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:20:46.271366   53006 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:20:46.271394   53006 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:20:46.271419   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHHostname
	I0919 23:20:46.275249   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:46.275764   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:46.275814   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:46.275948   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHPort
	I0919 23:20:46.276184   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHKeyPath
	I0919 23:20:46.276384   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHUsername
	I0919 23:20:46.276553   53006 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/test-preload-227745/id_rsa Username:docker}
	I0919 23:20:46.281202   53006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32999
	I0919 23:20:46.281661   53006 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:20:46.282148   53006 main.go:141] libmachine: Using API Version  1
	I0919 23:20:46.282177   53006 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:20:46.282567   53006 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:20:46.282774   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetState
	I0919 23:20:46.284763   53006 main.go:141] libmachine: (test-preload-227745) Calling .DriverName
	I0919 23:20:46.284993   53006 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:20:46.285009   53006 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:20:46.285027   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHHostname
	I0919 23:20:46.288598   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:46.289089   53006 main.go:141] libmachine: (test-preload-227745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:4e:37", ip: ""} in network mk-test-preload-227745: {Iface:virbr1 ExpiryTime:2025-09-20 00:20:27 +0000 UTC Type:0 Mac:52:54:00:41:4e:37 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-227745 Clientid:01:52:54:00:41:4e:37}
	I0919 23:20:46.289118   53006 main.go:141] libmachine: (test-preload-227745) DBG | domain test-preload-227745 has defined IP address 192.168.39.242 and MAC address 52:54:00:41:4e:37 in network mk-test-preload-227745
	I0919 23:20:46.289301   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHPort
	I0919 23:20:46.289479   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHKeyPath
	I0919 23:20:46.289780   53006 main.go:141] libmachine: (test-preload-227745) Calling .GetSSHUsername
	I0919 23:20:46.289930   53006 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/test-preload-227745/id_rsa Username:docker}
	I0919 23:20:46.460024   53006 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:20:46.497397   53006 node_ready.go:35] waiting up to 6m0s for node "test-preload-227745" to be "Ready" ...
	I0919 23:20:46.615518   53006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:20:46.628360   53006 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:20:47.326587   53006 main.go:141] libmachine: Making call to close driver server
	I0919 23:20:47.326619   53006 main.go:141] libmachine: (test-preload-227745) Calling .Close
	I0919 23:20:47.326644   53006 main.go:141] libmachine: Making call to close driver server
	I0919 23:20:47.326662   53006 main.go:141] libmachine: (test-preload-227745) Calling .Close
	I0919 23:20:47.326957   53006 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:20:47.326974   53006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:20:47.326976   53006 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:20:47.326983   53006 main.go:141] libmachine: Making call to close driver server
	I0919 23:20:47.326987   53006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:20:47.326991   53006 main.go:141] libmachine: (test-preload-227745) Calling .Close
	I0919 23:20:47.327011   53006 main.go:141] libmachine: Making call to close driver server
	I0919 23:20:47.327019   53006 main.go:141] libmachine: (test-preload-227745) Calling .Close
	I0919 23:20:47.327181   53006 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:20:47.327194   53006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:20:47.327233   53006 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:20:47.327243   53006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:20:47.336358   53006 main.go:141] libmachine: Making call to close driver server
	I0919 23:20:47.336382   53006 main.go:141] libmachine: (test-preload-227745) Calling .Close
	I0919 23:20:47.336664   53006 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:20:47.336676   53006 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:20:47.336700   53006 main.go:141] libmachine: (test-preload-227745) DBG | Closing plugin on server side
	I0919 23:20:47.338489   53006 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 23:20:47.339577   53006 addons.go:514] duration metric: took 1.111740637s for enable addons: enabled=[storage-provisioner default-storageclass]
	W0919 23:20:48.501429   53006 node_ready.go:57] node "test-preload-227745" has "Ready":"False" status (will retry)
	W0919 23:20:51.002078   53006 node_ready.go:57] node "test-preload-227745" has "Ready":"False" status (will retry)
	W0919 23:20:53.004739   53006 node_ready.go:57] node "test-preload-227745" has "Ready":"False" status (will retry)
	I0919 23:20:55.002232   53006 node_ready.go:49] node "test-preload-227745" is "Ready"
	I0919 23:20:55.002261   53006 node_ready.go:38] duration metric: took 8.504829692s for node "test-preload-227745" to be "Ready" ...
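Node readiness above means the NodeReady condition reported True within the 6m0s budget. A sketch of an equivalent wait with client-go, using the kubeconfig path and node name from the log; the poll interval is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21594-14764/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for deadline := time.Now().Add(6 * time.Minute); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "test-preload-227745", metav1.GetOptions{})
		if err != nil {
			continue
		}
		for _, c := range node.Status.Conditions {
			// A node is "Ready" when the NodeReady condition is True.
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
	}
	fmt.Println("timed out waiting for node readiness")
}
```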
	I0919 23:20:55.002288   53006 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:20:55.002335   53006 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:20:55.027320   53006 api_server.go:72] duration metric: took 8.799517621s to wait for apiserver process to appear ...
	I0919 23:20:55.027352   53006 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:20:55.027371   53006 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0919 23:20:55.033188   53006 api_server.go:279] https://192.168.39.242:8443/healthz returned 200:
	ok
	I0919 23:20:55.034129   53006 api_server.go:141] control plane version: v1.32.0
	I0919 23:20:55.034155   53006 api_server.go:131] duration metric: took 6.796433ms to wait for apiserver health ...
	I0919 23:20:55.034166   53006 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:20:55.038181   53006 system_pods.go:59] 7 kube-system pods found
	I0919 23:20:55.038207   53006 system_pods.go:61] "coredns-668d6bf9bc-247xs" [5d19b207-7860-495b-8517-32d3ed0aeeba] Running
	I0919 23:20:55.038212   53006 system_pods.go:61] "etcd-test-preload-227745" [2ef94cd9-7898-45fc-9ae0-e8c7aaa332ca] Running
	I0919 23:20:55.038220   53006 system_pods.go:61] "kube-apiserver-test-preload-227745" [ca6ef810-ba94-438b-a08b-4b3090fd2277] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:20:55.038230   53006 system_pods.go:61] "kube-controller-manager-test-preload-227745" [06d634c3-c8da-4825-9014-19b6b5ec15bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:20:55.038236   53006 system_pods.go:61] "kube-proxy-p9lk5" [259605dc-9f56-4b61-8f89-560bab0084ef] Running
	I0919 23:20:55.038246   53006 system_pods.go:61] "kube-scheduler-test-preload-227745" [c455729a-7726-4425-9bf6-ab923f148e51] Running
	I0919 23:20:55.038253   53006 system_pods.go:61] "storage-provisioner" [be6f118f-b798-4baf-b817-60b6262605fb] Running
	I0919 23:20:55.038259   53006 system_pods.go:74] duration metric: took 4.086919ms to wait for pod list to return data ...
	I0919 23:20:55.038267   53006 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:20:55.041846   53006 default_sa.go:45] found service account: "default"
	I0919 23:20:55.041884   53006 default_sa.go:55] duration metric: took 3.609207ms for default service account to be created ...
	I0919 23:20:55.041896   53006 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:20:55.044613   53006 system_pods.go:86] 7 kube-system pods found
	I0919 23:20:55.044644   53006 system_pods.go:89] "coredns-668d6bf9bc-247xs" [5d19b207-7860-495b-8517-32d3ed0aeeba] Running
	I0919 23:20:55.044652   53006 system_pods.go:89] "etcd-test-preload-227745" [2ef94cd9-7898-45fc-9ae0-e8c7aaa332ca] Running
	I0919 23:20:55.044666   53006 system_pods.go:89] "kube-apiserver-test-preload-227745" [ca6ef810-ba94-438b-a08b-4b3090fd2277] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:20:55.044677   53006 system_pods.go:89] "kube-controller-manager-test-preload-227745" [06d634c3-c8da-4825-9014-19b6b5ec15bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:20:55.044684   53006 system_pods.go:89] "kube-proxy-p9lk5" [259605dc-9f56-4b61-8f89-560bab0084ef] Running
	I0919 23:20:55.044690   53006 system_pods.go:89] "kube-scheduler-test-preload-227745" [c455729a-7726-4425-9bf6-ab923f148e51] Running
	I0919 23:20:55.044697   53006 system_pods.go:89] "storage-provisioner" [be6f118f-b798-4baf-b817-60b6262605fb] Running
	I0919 23:20:55.044714   53006 system_pods.go:126] duration metric: took 2.801974ms to wait for k8s-apps to be running ...
	I0919 23:20:55.044754   53006 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:20:55.044808   53006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:20:55.062059   53006 system_svc.go:56] duration metric: took 17.300535ms WaitForService to wait for kubelet
	I0919 23:20:55.062092   53006 kubeadm.go:578] duration metric: took 8.834293976s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:20:55.062109   53006 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:20:55.068010   53006 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 23:20:55.068046   53006 node_conditions.go:123] node cpu capacity is 2
	I0919 23:20:55.068062   53006 node_conditions.go:105] duration metric: took 5.94763ms to run NodePressure ...
	I0919 23:20:55.068078   53006 start.go:241] waiting for startup goroutines ...
	I0919 23:20:55.068090   53006 start.go:246] waiting for cluster config update ...
	I0919 23:20:55.068107   53006 start.go:255] writing updated cluster config ...
	I0919 23:20:55.068461   53006 ssh_runner.go:195] Run: rm -f paused
	I0919 23:20:55.074217   53006 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:20:55.074817   53006 kapi.go:59] client config for test-preload-227745: &rest.Config{Host:"https://192.168.39.242:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14764/.minikube/profiles/test-preload-227745/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 23:20:55.078423   53006 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-247xs" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:20:55.083804   53006 pod_ready.go:94] pod "coredns-668d6bf9bc-247xs" is "Ready"
	I0919 23:20:55.083827   53006 pod_ready.go:86] duration metric: took 5.384569ms for pod "coredns-668d6bf9bc-247xs" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:20:55.086433   53006 pod_ready.go:83] waiting for pod "etcd-test-preload-227745" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:20:55.091741   53006 pod_ready.go:94] pod "etcd-test-preload-227745" is "Ready"
	I0919 23:20:55.091770   53006 pod_ready.go:86] duration metric: took 5.316727ms for pod "etcd-test-preload-227745" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:20:55.094152   53006 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-227745" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:20:56.100961   53006 pod_ready.go:94] pod "kube-apiserver-test-preload-227745" is "Ready"
	I0919 23:20:56.100990   53006 pod_ready.go:86] duration metric: took 1.006817212s for pod "kube-apiserver-test-preload-227745" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:20:56.103356   53006 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-227745" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:20:58.109576   53006 pod_ready.go:104] pod "kube-controller-manager-test-preload-227745" is not "Ready", error: <nil>
	I0919 23:20:58.609796   53006 pod_ready.go:94] pod "kube-controller-manager-test-preload-227745" is "Ready"
	I0919 23:20:58.609822   53006 pod_ready.go:86] duration metric: took 2.506446849s for pod "kube-controller-manager-test-preload-227745" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:20:58.612231   53006 pod_ready.go:83] waiting for pod "kube-proxy-p9lk5" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:20:58.879377   53006 pod_ready.go:94] pod "kube-proxy-p9lk5" is "Ready"
	I0919 23:20:58.879403   53006 pod_ready.go:86] duration metric: took 267.150043ms for pod "kube-proxy-p9lk5" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:20:59.078750   53006 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-227745" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:20:59.478830   53006 pod_ready.go:94] pod "kube-scheduler-test-preload-227745" is "Ready"
	I0919 23:20:59.478862   53006 pod_ready.go:86] duration metric: took 400.090365ms for pod "kube-scheduler-test-preload-227745" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:20:59.478875   53006 pod_ready.go:40] duration metric: took 4.404619978s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
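The pod_ready loop above walks one label selector per control-plane component (k8s-app=kube-dns, component=etcd, and so on) and waits for PodReady=True on each match. A hedged helper in the same shape, reusing a clientset built as in the earlier sketch:

```go
package podwait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// allPodsReady reports whether every kube-system pod matching the given
// label selector (e.g. "component=kube-scheduler") has PodReady=True.
func allPodsReady(cs *kubernetes.Clientset, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, pod := range pods.Items {
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}
```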
	I0919 23:20:59.522356   53006 start.go:617] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I0919 23:20:59.524002   53006 out.go:203] 
	W0919 23:20:59.525449   53006 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I0919 23:20:59.526686   53006 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I0919 23:20:59.528065   53006 out.go:179] * Done! kubectl is now configured to use "test-preload-227745" cluster and "default" namespace by default
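The closing warning compares kubectl's minor version against the cluster's: kubectl officially supports a skew of one minor version in either direction, so 1.34.1 against 1.32.0 (skew 2) earns the notice. A toy version of that check; the parsing is deliberately simplistic and assumes well-formed "major.minor.patch" strings:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference in minor versions between two
// "major.minor.patch" strings, mirroring the "(minor skew: 2)" line above.
func minorSkew(a, b string) int {
	minor := func(v string) int {
		parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
		n, _ := strconv.Atoi(parts[1])
		return n
	}
	d := minor(a) - minor(b)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	if skew := minorSkew("1.34.1", "1.32.0"); skew > 1 {
		fmt.Printf("! kubectl and cluster differ by %d minor versions; expect incompatibilities\n", skew)
	}
}
```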
	
	
	==> CRI-O <==
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.478402613Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758324060478376626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a22af4ef-b66d-4efe-a103-85d869e00e25 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.479085534Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fea3ee4-7326-4186-bbfc-873149bb51fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.479468900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fea3ee4-7326-4186-bbfc-873149bb51fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.479954151Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6a7b345444db97ff60ef17c60e7083f2a5c176f3e6510fd1df2d2601c2630e04,PodSandboxId:f8db2908e3c85c449c501472018970758f991d4a53b41998daaebd906e3bceaa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1758324052840520064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-247xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d19b207-7860-495b-8517-32d3ed0aeeba,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c73c25e36fa42fd965bd6fb0003461aa27e44eafd8478815b08256014195e2a,PodSandboxId:ef29871083206daae0d7389b694de3457c75fa88a90936488e9446ceebfdeb21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1758324045195791149,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lk5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 259605dc-9f56-4b61-8f89-560bab0084ef,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e383e781c33929f077a9fc0c426614b0dbbd2251b689f6ae541f39532a3e94eb,PodSandboxId:8c5cede3c0ff61e5948bb7afa638b3d3de224f0ef82aaf20e666de37e5e45373,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758324045202130958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be
6f118f-b798-4baf-b817-60b6262605fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83a4c1d1a66028c61a2d9463adab37f387ca72e4f39466fa569b57a8544ffe8c,PodSandboxId:66286732da09758bfb6934a3d8aee8091fefb104561553e9957af28741103f1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1758324040815975805,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-227745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66738da742b9679acbb34291cc8404b5,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f27d7951879665bea07a89328b268cace80e1ace15b922b83beb7306941236b,PodSandboxId:969a401672d669ce851428687df4b9bfc4f3760bd05f8169ea812917e664253e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1758324040806101641,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-227745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1416a8da336950f53d9b8b7
d62d42579,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2e69ebeb9730774e5c032de5a6cbd01f5c6e0d9d65bd47ecc7237ac48b0f08,PodSandboxId:50e1ac3fc741f2aca81e8b3cea51afcc0917c28ec6e376dc8bb7c9d4ca1e9725,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1758324040771473993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-227745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbba724b865a0b7baaba0ea2df84c7f3,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c10e4f778495c71a1e8acaf513655330cea976c4351b46849c91e425769bd406,PodSandboxId:b6ca8a569ebc38f7f439ddfb77aeaaaac3b43f74cfec2bf627431d732552d095,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1758324040752094614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-227745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0aeaa87217e82cd277e6b8ba379cea8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0fea3ee4-7326-4186-bbfc-873149bb51fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.522277519Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24392459-fb7a-4a2c-a620-50607623b08f name=/runtime.v1.RuntimeService/Version
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.522366928Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24392459-fb7a-4a2c-a620-50607623b08f name=/runtime.v1.RuntimeService/Version
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.523564201Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fbfbdedf-5e23-42c0-8701-ac3bff2db4d3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.524087945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758324060524063872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fbfbdedf-5e23-42c0-8701-ac3bff2db4d3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.524792265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fd0997e-9835-4f87-909e-3b4adf058a65 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.524863657Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fd0997e-9835-4f87-909e-3b4adf058a65 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.525114809Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6a7b345444db97ff60ef17c60e7083f2a5c176f3e6510fd1df2d2601c2630e04,PodSandboxId:f8db2908e3c85c449c501472018970758f991d4a53b41998daaebd906e3bceaa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1758324052840520064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-247xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d19b207-7860-495b-8517-32d3ed0aeeba,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c73c25e36fa42fd965bd6fb0003461aa27e44eafd8478815b08256014195e2a,PodSandboxId:ef29871083206daae0d7389b694de3457c75fa88a90936488e9446ceebfdeb21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1758324045195791149,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lk5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 259605dc-9f56-4b61-8f89-560bab0084ef,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e383e781c33929f077a9fc0c426614b0dbbd2251b689f6ae541f39532a3e94eb,PodSandboxId:8c5cede3c0ff61e5948bb7afa638b3d3de224f0ef82aaf20e666de37e5e45373,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758324045202130958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be
6f118f-b798-4baf-b817-60b6262605fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83a4c1d1a66028c61a2d9463adab37f387ca72e4f39466fa569b57a8544ffe8c,PodSandboxId:66286732da09758bfb6934a3d8aee8091fefb104561553e9957af28741103f1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1758324040815975805,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-227745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66738da742b9679acbb34291cc8404b5,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f27d7951879665bea07a89328b268cace80e1ace15b922b83beb7306941236b,PodSandboxId:969a401672d669ce851428687df4b9bfc4f3760bd05f8169ea812917e664253e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1758324040806101641,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-227745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1416a8da336950f53d9b8b7
d62d42579,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2e69ebeb9730774e5c032de5a6cbd01f5c6e0d9d65bd47ecc7237ac48b0f08,PodSandboxId:50e1ac3fc741f2aca81e8b3cea51afcc0917c28ec6e376dc8bb7c9d4ca1e9725,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1758324040771473993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-227745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbba724b865a0b7baaba0ea2df84c7f3,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c10e4f778495c71a1e8acaf513655330cea976c4351b46849c91e425769bd406,PodSandboxId:b6ca8a569ebc38f7f439ddfb77aeaaaac3b43f74cfec2bf627431d732552d095,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1758324040752094614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-227745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0aeaa87217e82cd277e6b8ba379cea8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fd0997e-9835-4f87-909e-3b4adf058a65 name=/runtime.v1.RuntimeService/ListContainers
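	The Version / ImageFsInfo / ListContainers request-response pairs above are the same CRI RPCs that crictl issues, so this dump can be reproduced by hand against the CRI-O socket named in the node annotations. A minimal sketch, using the ssh form this report uses elsewhere (assumes crictl is present on the node, which the minikube guest image ships):
	
	  out/minikube-linux-amd64 -p test-preload-227745 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"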
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.566875198Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=776ae73f-bf58-45db-beda-25745ac7758d name=/runtime.v1.RuntimeService/Version
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.567039963Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=776ae73f-bf58-45db-beda-25745ac7758d name=/runtime.v1.RuntimeService/Version
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.568665361Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5df4fec0-8ae4-4f73-9c30-488e1ce0c699 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.569403296Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758324060569353435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5df4fec0-8ae4-4f73-9c30-488e1ce0c699 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.570101902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb80080c-a0f6-4fcf-a041-97cbb9a5276a name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.570315340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb80080c-a0f6-4fcf-a041-97cbb9a5276a name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.570529214Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6a7b345444db97ff60ef17c60e7083f2a5c176f3e6510fd1df2d2601c2630e04,PodSandboxId:f8db2908e3c85c449c501472018970758f991d4a53b41998daaebd906e3bceaa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1758324052840520064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-247xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d19b207-7860-495b-8517-32d3ed0aeeba,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c73c25e36fa42fd965bd6fb0003461aa27e44eafd8478815b08256014195e2a,PodSandboxId:ef29871083206daae0d7389b694de3457c75fa88a90936488e9446ceebfdeb21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1758324045195791149,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lk5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 259605dc-9f56-4b61-8f89-560bab0084ef,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e383e781c33929f077a9fc0c426614b0dbbd2251b689f6ae541f39532a3e94eb,PodSandboxId:8c5cede3c0ff61e5948bb7afa638b3d3de224f0ef82aaf20e666de37e5e45373,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758324045202130958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be
6f118f-b798-4baf-b817-60b6262605fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83a4c1d1a66028c61a2d9463adab37f387ca72e4f39466fa569b57a8544ffe8c,PodSandboxId:66286732da09758bfb6934a3d8aee8091fefb104561553e9957af28741103f1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1758324040815975805,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-227745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66738da742b9679acbb34291cc8404b5,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f27d7951879665bea07a89328b268cace80e1ace15b922b83beb7306941236b,PodSandboxId:969a401672d669ce851428687df4b9bfc4f3760bd05f8169ea812917e664253e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1758324040806101641,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-227745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1416a8da336950f53d9b8b7
d62d42579,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2e69ebeb9730774e5c032de5a6cbd01f5c6e0d9d65bd47ecc7237ac48b0f08,PodSandboxId:50e1ac3fc741f2aca81e8b3cea51afcc0917c28ec6e376dc8bb7c9d4ca1e9725,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1758324040771473993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-227745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbba724b865a0b7baaba0ea2df84c7f3,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c10e4f778495c71a1e8acaf513655330cea976c4351b46849c91e425769bd406,PodSandboxId:b6ca8a569ebc38f7f439ddfb77aeaaaac3b43f74cfec2bf627431d732552d095,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1758324040752094614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-227745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0aeaa87217e82cd277e6b8ba379cea8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb80080c-a0f6-4fcf-a041-97cbb9a5276a name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.609321783Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=060482e5-8249-4d78-93e2-09ac2e3e11c6 name=/runtime.v1.RuntimeService/Version
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.609417343Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=060482e5-8249-4d78-93e2-09ac2e3e11c6 name=/runtime.v1.RuntimeService/Version
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.610875477Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4cec9f50-90de-4fe9-a458-71397f0250d0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.612367266Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758324060612265802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4cec9f50-90de-4fe9-a458-71397f0250d0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.613575971Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1f2f20b-6f4e-4b43-9c99-5707e9c6e9ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.613682214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1f2f20b-6f4e-4b43-9c99-5707e9c6e9ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:21:00 test-preload-227745 crio[830]: time="2025-09-19 23:21:00.613839076Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6a7b345444db97ff60ef17c60e7083f2a5c176f3e6510fd1df2d2601c2630e04,PodSandboxId:f8db2908e3c85c449c501472018970758f991d4a53b41998daaebd906e3bceaa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1758324052840520064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-247xs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d19b207-7860-495b-8517-32d3ed0aeeba,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c73c25e36fa42fd965bd6fb0003461aa27e44eafd8478815b08256014195e2a,PodSandboxId:ef29871083206daae0d7389b694de3457c75fa88a90936488e9446ceebfdeb21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1758324045195791149,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p9lk5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 259605dc-9f56-4b61-8f89-560bab0084ef,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e383e781c33929f077a9fc0c426614b0dbbd2251b689f6ae541f39532a3e94eb,PodSandboxId:8c5cede3c0ff61e5948bb7afa638b3d3de224f0ef82aaf20e666de37e5e45373,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758324045202130958,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be
6f118f-b798-4baf-b817-60b6262605fb,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83a4c1d1a66028c61a2d9463adab37f387ca72e4f39466fa569b57a8544ffe8c,PodSandboxId:66286732da09758bfb6934a3d8aee8091fefb104561553e9957af28741103f1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1758324040815975805,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-227745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66738da742b9679acbb34291cc8404b5,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f27d7951879665bea07a89328b268cace80e1ace15b922b83beb7306941236b,PodSandboxId:969a401672d669ce851428687df4b9bfc4f3760bd05f8169ea812917e664253e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1758324040806101641,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-227745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1416a8da336950f53d9b8b7
d62d42579,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec2e69ebeb9730774e5c032de5a6cbd01f5c6e0d9d65bd47ecc7237ac48b0f08,PodSandboxId:50e1ac3fc741f2aca81e8b3cea51afcc0917c28ec6e376dc8bb7c9d4ca1e9725,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1758324040771473993,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-227745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbba724b865a0b7baaba0ea2df84c7f3,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c10e4f778495c71a1e8acaf513655330cea976c4351b46849c91e425769bd406,PodSandboxId:b6ca8a569ebc38f7f439ddfb77aeaaaac3b43f74cfec2bf627431d732552d095,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1758324040752094614,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-227745,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0aeaa87217e82cd277e6b8ba379cea8,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1f2f20b-6f4e-4b43-9c99-5707e9c6e9ea name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6a7b345444db9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   7 seconds ago       Running             coredns                   1                   f8db2908e3c85       coredns-668d6bf9bc-247xs
	e383e781c3392       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   8c5cede3c0ff6       storage-provisioner
	1c73c25e36fa4       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   15 seconds ago      Running             kube-proxy                1                   ef29871083206       kube-proxy-p9lk5
	83a4c1d1a6602       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   66286732da097       etcd-test-preload-227745
	9f27d79518796       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   969a401672d66       kube-controller-manager-test-preload-227745
	ec2e69ebeb973       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   50e1ac3fc741f       kube-apiserver-test-preload-227745
	c10e4f778495c       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   b6ca8a569ebc3       kube-scheduler-test-preload-227745
	
	
	==> coredns [6a7b345444db97ff60ef17c60e7083f2a5c176f3e6510fd1df2d2601c2630e04] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51824 - 30757 "HINFO IN 8015629472165032222.1800431320189654985. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.065244034s
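	The CoreDNS output above (config SHA, version banner, and one HINFO self-check query answered NXDOMAIN) is normal startup logging. It can be fetched directly with kubectl, assuming the kubectl context carries the profile name as elsewhere in this report:
	
	  kubectl --context test-preload-227745 -n kube-system logs coredns-668d6bf9bc-247xs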
	
	
	==> describe nodes <==
	Name:               test-preload-227745
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-227745
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=test-preload-227745
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_19_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:19:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-227745
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:20:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:20:54 +0000   Fri, 19 Sep 2025 23:19:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:20:54 +0000   Fri, 19 Sep 2025 23:19:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:20:54 +0000   Fri, 19 Sep 2025 23:19:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:20:54 +0000   Fri, 19 Sep 2025 23:20:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.242
	  Hostname:    test-preload-227745
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 71161acb9fd64a19b8a94d69541ffe74
	  System UUID:                71161acb-9fd6-4a19-b8a9-4d69541ffe74
	  Boot ID:                    9ca3d21a-d738-4235-b7fe-e6944915e8ab
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-247xs                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     96s
	  kube-system                 etcd-test-preload-227745                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         102s
	  kube-system                 kube-apiserver-test-preload-227745             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-test-preload-227745    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-p9lk5                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-test-preload-227745             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 95s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   Starting                 101s               kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  101s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  100s               kubelet          Node test-preload-227745 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    100s               kubelet          Node test-preload-227745 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     100s               kubelet          Node test-preload-227745 status is now: NodeHasSufficientPID
	  Normal   NodeReady                100s               kubelet          Node test-preload-227745 status is now: NodeReady
	  Normal   RegisteredNode           97s                node-controller  Node test-preload-227745 event: Registered Node test-preload-227745 in Controller
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-227745 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-227745 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-227745 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                kubelet          Node test-preload-227745 has been rebooted, boot id: 9ca3d21a-d738-4235-b7fe-e6944915e8ab
	  Normal   RegisteredNode           13s                node-controller  Node test-preload-227745 event: Registered Node test-preload-227745 in Controller
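	The node block above is kubectl describe node output; the post-reboot conditions and events can be re-checked the same way:
	
	  kubectl --context test-preload-227745 describe node test-preload-227745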
	
	
	==> dmesg <==
	[Sep19 23:20] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001639] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005460] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.002639] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.090956] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.099592] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.486418] kauditd_printk_skb: 177 callbacks suppressed
	[  +0.000079] kauditd_printk_skb: 128 callbacks suppressed
	[  +5.620963] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [83a4c1d1a66028c61a2d9463adab37f387ca72e4f39466fa569b57a8544ffe8c] <==
	{"level":"info","ts":"2025-09-19T23:20:41.288600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc switched to configuration voters=(5928412279151520972)"}
	{"level":"info","ts":"2025-09-19T23:20:41.291241Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9dd55050173e419e","local-member-id":"5245f38ecce3eccc","added-peer-id":"5245f38ecce3eccc","added-peer-peer-urls":["https://192.168.39.242:2380"]}
	{"level":"info","ts":"2025-09-19T23:20:41.291367Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9dd55050173e419e","local-member-id":"5245f38ecce3eccc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:20:41.291408Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-19T23:20:41.296142Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-19T23:20:41.301637Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"5245f38ecce3eccc","initial-advertise-peer-urls":["https://192.168.39.242:2380"],"listen-peer-urls":["https://192.168.39.242:2380"],"advertise-client-urls":["https://192.168.39.242:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.242:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-19T23:20:41.301710Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-19T23:20:41.297843Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.242:2380"}
	{"level":"info","ts":"2025-09-19T23:20:41.301755Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.242:2380"}
	{"level":"info","ts":"2025-09-19T23:20:43.153132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-19T23:20:43.153201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-19T23:20:43.153240Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc received MsgPreVoteResp from 5245f38ecce3eccc at term 2"}
	{"level":"info","ts":"2025-09-19T23:20:43.153254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc became candidate at term 3"}
	{"level":"info","ts":"2025-09-19T23:20:43.153265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc received MsgVoteResp from 5245f38ecce3eccc at term 3"}
	{"level":"info","ts":"2025-09-19T23:20:43.153274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc became leader at term 3"}
	{"level":"info","ts":"2025-09-19T23:20:43.153281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5245f38ecce3eccc elected leader 5245f38ecce3eccc at term 3"}
	{"level":"info","ts":"2025-09-19T23:20:43.154817Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"5245f38ecce3eccc","local-member-attributes":"{Name:test-preload-227745 ClientURLs:[https://192.168.39.242:2379]}","request-path":"/0/members/5245f38ecce3eccc/attributes","cluster-id":"9dd55050173e419e","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-19T23:20:43.155035Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-19T23:20:43.155192Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-19T23:20:43.156030Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-19T23:20:43.156036Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-19T23:20:43.156048Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-19T23:20:43.156408Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-09-19T23:20:43.156594Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-19T23:20:43.157086Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.242:2379"}
	
	
	==> kernel <==
	 23:21:00 up 0 min,  0 users,  load average: 1.07, 0.31, 0.10
	Linux test-preload-227745 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Sep  9 02:24:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ec2e69ebeb9730774e5c032de5a6cbd01f5c6e0d9d65bd47ecc7237ac48b0f08] <==
	I0919 23:20:44.407969       1 shared_informer.go:320] Caches are synced for configmaps
	I0919 23:20:44.408561       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0919 23:20:44.413804       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 23:20:44.421096       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0919 23:20:44.422201       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0919 23:20:44.422287       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0919 23:20:44.422490       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0919 23:20:44.425474       1 aggregator.go:171] initial CRD sync complete...
	I0919 23:20:44.425961       1 autoregister_controller.go:144] Starting autoregister controller
	I0919 23:20:44.426002       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 23:20:44.426019       1 cache.go:39] Caches are synced for autoregister controller
	I0919 23:20:44.437120       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0919 23:20:44.438500       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0919 23:20:44.451161       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0919 23:20:44.460319       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0919 23:20:44.477702       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 23:20:44.775632       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0919 23:20:45.311843       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 23:20:46.048465       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0919 23:20:46.085286       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0919 23:20:46.120054       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 23:20:46.128233       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 23:20:47.918386       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 23:20:48.072024       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0919 23:20:48.119781       1 controller.go:615] quota admission added evaluator for: endpoints
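	With caches synced and the quota admission evaluators registered, the apiserver is fully up; its aggregated health checks can be cross-checked through the readyz endpoint (a hedged sketch):
	
	  kubectl --context test-preload-227745 get --raw '/readyz?verbose'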
	
	
	==> kube-controller-manager [9f27d7951879665bea07a89328b268cace80e1ace15b922b83beb7306941236b] <==
	I0919 23:20:47.619754       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0919 23:20:47.619829       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0919 23:20:47.620555       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0919 23:20:47.620719       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0919 23:20:47.620776       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0919 23:20:47.624998       1 shared_informer.go:320] Caches are synced for PVC protection
	I0919 23:20:47.626225       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 23:20:47.627403       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 23:20:47.628882       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0919 23:20:47.638096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-227745"
	I0919 23:20:47.656186       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 23:20:47.664990       1 shared_informer.go:320] Caches are synced for stateful set
	I0919 23:20:47.665191       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0919 23:20:47.665291       1 shared_informer.go:320] Caches are synced for disruption
	I0919 23:20:47.666091       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0919 23:20:47.667519       1 shared_informer.go:320] Caches are synced for ephemeral
	I0919 23:20:47.667627       1 shared_informer.go:320] Caches are synced for endpoint
	I0919 23:20:48.079232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="413.73252ms"
	I0919 23:20:48.080259       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="36.405µs"
	I0919 23:20:52.987229       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="48.076µs"
	I0919 23:20:54.010875       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="23.311017ms"
	I0919 23:20:54.012630       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="54.446µs"
	I0919 23:20:54.690429       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-227745"
	I0919 23:20:54.702769       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-227745"
	I0919 23:20:57.617544       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1c73c25e36fa42fd965bd6fb0003461aa27e44eafd8478815b08256014195e2a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0919 23:20:45.437311       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0919 23:20:45.454964       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.242"]
	E0919 23:20:45.456207       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:20:45.528050       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0919 23:20:45.528165       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 23:20:45.528205       1 server_linux.go:170] "Using iptables Proxier"
	I0919 23:20:45.531630       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:20:45.532041       1 server.go:497] "Version info" version="v1.32.0"
	I0919 23:20:45.532091       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:20:45.557396       1 config.go:329] "Starting node config controller"
	I0919 23:20:45.557827       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 23:20:45.560157       1 config.go:199] "Starting service config controller"
	I0919 23:20:45.560210       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 23:20:45.560249       1 config.go:105] "Starting endpoint slice config controller"
	I0919 23:20:45.560265       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 23:20:45.658405       1 shared_informer.go:320] Caches are synced for node config
	I0919 23:20:45.660982       1 shared_informer.go:320] Caches are synced for service config
	I0919 23:20:45.661141       1 shared_informer.go:320] Caches are synced for endpoint slice config
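	The two nftables cleanup errors at the top are benign here: the guest kernel rejects the nft commands ("Operation not supported"), and kube-proxy proceeds with the iptables proxier as the following lines confirm. The resulting NAT chains can be inspected from the node (a sketch):
	
	  out/minikube-linux-amd64 -p test-preload-227745 ssh "sudo iptables -t nat -L KUBE-SERVICES -n"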
	
	
	==> kube-scheduler [c10e4f778495c71a1e8acaf513655330cea976c4351b46849c91e425769bd406] <==
	I0919 23:20:41.753124       1 serving.go:386] Generated self-signed cert in-memory
	W0919 23:20:44.379163       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 23:20:44.380597       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 23:20:44.380722       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 23:20:44.380747       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 23:20:44.422428       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I0919 23:20:44.422514       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:20:44.433764       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:20:44.434276       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0919 23:20:44.434296       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 23:20:44.435124       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 23:20:44.536076       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
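	The startup warnings above are the scheduler polling the extension-apiserver-authentication ConfigMap before the apiserver had finished syncing; it continues without the lookup, as logged. Whether the permission is in place afterwards can be probed with (a sketch):
	
	  kubectl --context test-preload-227745 -n kube-system auth can-i get configmaps/extension-apiserver-authentication --as=system:kube-scheduler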
	
	
	==> kubelet <==
	Sep 19 23:20:44 test-preload-227745 kubelet[1157]: I0919 23:20:44.742844    1157 apiserver.go:52] "Watching apiserver"
	Sep 19 23:20:44 test-preload-227745 kubelet[1157]: E0919 23:20:44.750861    1157 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-247xs" podUID="5d19b207-7860-495b-8517-32d3ed0aeeba"
	Sep 19 23:20:44 test-preload-227745 kubelet[1157]: I0919 23:20:44.766683    1157 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Sep 19 23:20:44 test-preload-227745 kubelet[1157]: I0919 23:20:44.768294    1157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/be6f118f-b798-4baf-b817-60b6262605fb-tmp\") pod \"storage-provisioner\" (UID: \"be6f118f-b798-4baf-b817-60b6262605fb\") " pod="kube-system/storage-provisioner"
	Sep 19 23:20:44 test-preload-227745 kubelet[1157]: I0919 23:20:44.769052    1157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/259605dc-9f56-4b61-8f89-560bab0084ef-xtables-lock\") pod \"kube-proxy-p9lk5\" (UID: \"259605dc-9f56-4b61-8f89-560bab0084ef\") " pod="kube-system/kube-proxy-p9lk5"
	Sep 19 23:20:44 test-preload-227745 kubelet[1157]: I0919 23:20:44.769167    1157 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/259605dc-9f56-4b61-8f89-560bab0084ef-lib-modules\") pod \"kube-proxy-p9lk5\" (UID: \"259605dc-9f56-4b61-8f89-560bab0084ef\") " pod="kube-system/kube-proxy-p9lk5"
	Sep 19 23:20:44 test-preload-227745 kubelet[1157]: E0919 23:20:44.769243    1157 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 19 23:20:44 test-preload-227745 kubelet[1157]: E0919 23:20:44.769309    1157 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d19b207-7860-495b-8517-32d3ed0aeeba-config-volume podName:5d19b207-7860-495b-8517-32d3ed0aeeba nodeName:}" failed. No retries permitted until 2025-09-19 23:20:45.269283326 +0000 UTC m=+5.625667689 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d19b207-7860-495b-8517-32d3ed0aeeba-config-volume") pod "coredns-668d6bf9bc-247xs" (UID: "5d19b207-7860-495b-8517-32d3ed0aeeba") : object "kube-system"/"coredns" not registered
	Sep 19 23:20:44 test-preload-227745 kubelet[1157]: E0919 23:20:44.847726    1157 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Sep 19 23:20:44 test-preload-227745 kubelet[1157]: I0919 23:20:44.921505    1157 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-227745"
	Sep 19 23:20:44 test-preload-227745 kubelet[1157]: I0919 23:20:44.922198    1157 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-227745"
	Sep 19 23:20:44 test-preload-227745 kubelet[1157]: E0919 23:20:44.934883    1157 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-227745\" already exists" pod="kube-system/etcd-test-preload-227745"
	Sep 19 23:20:44 test-preload-227745 kubelet[1157]: E0919 23:20:44.936142    1157 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-227745\" already exists" pod="kube-system/kube-scheduler-test-preload-227745"
	Sep 19 23:20:45 test-preload-227745 kubelet[1157]: E0919 23:20:45.273367    1157 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 19 23:20:45 test-preload-227745 kubelet[1157]: E0919 23:20:45.273448    1157 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d19b207-7860-495b-8517-32d3ed0aeeba-config-volume podName:5d19b207-7860-495b-8517-32d3ed0aeeba nodeName:}" failed. No retries permitted until 2025-09-19 23:20:46.273435362 +0000 UTC m=+6.629819740 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d19b207-7860-495b-8517-32d3ed0aeeba-config-volume") pod "coredns-668d6bf9bc-247xs" (UID: "5d19b207-7860-495b-8517-32d3ed0aeeba") : object "kube-system"/"coredns" not registered
	Sep 19 23:20:46 test-preload-227745 kubelet[1157]: E0919 23:20:46.281490    1157 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 19 23:20:46 test-preload-227745 kubelet[1157]: E0919 23:20:46.281593    1157 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d19b207-7860-495b-8517-32d3ed0aeeba-config-volume podName:5d19b207-7860-495b-8517-32d3ed0aeeba nodeName:}" failed. No retries permitted until 2025-09-19 23:20:48.281576939 +0000 UTC m=+8.637961314 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d19b207-7860-495b-8517-32d3ed0aeeba-config-volume") pod "coredns-668d6bf9bc-247xs" (UID: "5d19b207-7860-495b-8517-32d3ed0aeeba") : object "kube-system"/"coredns" not registered
	Sep 19 23:20:46 test-preload-227745 kubelet[1157]: E0919 23:20:46.797971    1157 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-247xs" podUID="5d19b207-7860-495b-8517-32d3ed0aeeba"
	Sep 19 23:20:48 test-preload-227745 kubelet[1157]: E0919 23:20:48.299277    1157 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 19 23:20:48 test-preload-227745 kubelet[1157]: E0919 23:20:48.299376    1157 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d19b207-7860-495b-8517-32d3ed0aeeba-config-volume podName:5d19b207-7860-495b-8517-32d3ed0aeeba nodeName:}" failed. No retries permitted until 2025-09-19 23:20:52.299358423 +0000 UTC m=+12.655742797 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5d19b207-7860-495b-8517-32d3ed0aeeba-config-volume") pod "coredns-668d6bf9bc-247xs" (UID: "5d19b207-7860-495b-8517-32d3ed0aeeba") : object "kube-system"/"coredns" not registered
	Sep 19 23:20:48 test-preload-227745 kubelet[1157]: E0919 23:20:48.797754    1157 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-247xs" podUID="5d19b207-7860-495b-8517-32d3ed0aeeba"
	Sep 19 23:20:49 test-preload-227745 kubelet[1157]: E0919 23:20:49.856885    1157 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758324049853271932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 23:20:49 test-preload-227745 kubelet[1157]: E0919 23:20:49.857513    1157 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758324049853271932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 23:20:59 test-preload-227745 kubelet[1157]: E0919 23:20:59.860272    1157 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758324059859439379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 23:20:59 test-preload-227745 kubelet[1157]: E0919 23:20:59.860320    1157 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758324059859439379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [e383e781c33929f077a9fc0c426614b0dbbd2251b689f6ae541f39532a3e94eb] <==
	I0919 23:20:45.299572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
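The eviction_manager errors in the log above show kubelet failing to derive dedicated image-filesystem stats from the ImageFsInfoResponse returned by CRI-O. To inspect that response directly, one could query the runtime on the node with crictl (a sketch; it assumes the test-preload-227745 profile is still running):

	$ out/minikube-linux-amd64 -p test-preload-227745 ssh -- sudo crictl imagefsinfo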
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-227745 -n test-preload-227745
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-227745 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-227745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-227745
--- FAIL: TestPreload (154.05s)
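The kubelet errors "No CNI configuration file in /etc/cni/net.d/" and "object \"kube-system\"/\"coredns\" not registered" seen above are typically transient right after a node restart, while the CNI plugin rewrites its config and the kubelet's informer caches resync. Checking whether the CNI config eventually appeared is a one-liner (sketch; it must run before the profile cleanup above deletes the VM):

	$ out/minikube-linux-amd64 -p test-preload-227745 ssh -- ls /etc/cni/net.d/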

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dscz6" [92d8e2bb-d9b6-4e61-8313-c3c386feb5dd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-304197 -n default-k8s-diff-port-304197
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-19 23:45:46.893306036 +0000 UTC m=+5502.291280230
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-304197 describe po kubernetes-dashboard-855c9754f9-dscz6 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-304197 describe po kubernetes-dashboard-855c9754f9-dscz6 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-dscz6
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-304197/192.168.39.80
Start Time:       Fri, 19 Sep 2025 23:36:40 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kvpxm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-kvpxm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m6s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dscz6 to default-k8s-diff-port-304197
  Warning  Failed     6m16s                  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    3m30s (x5 over 9m5s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     2m59s (x4 over 8m29s)  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     2m59s (x5 over 8m29s)  kubelet            Error: ErrImagePull
  Warning  Failed     100s (x16 over 8m29s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    37s (x21 over 8m29s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
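Every pull failure in the events above is Docker Hub's unauthenticated rate limit (toomanyrequests) rather than a cluster-side problem. One possible mitigation sketch is to pull the image with authenticated credentials and side-load it into the profile so kubelet never hits docker.io (commands assume a local Docker daemon that can log in to Docker Hub):

	$ docker login    # authenticated pulls get a much higher rate limit
	$ docker pull docker.io/kubernetesui/dashboard:v2.7.0
	$ out/minikube-linux-amd64 -p default-k8s-diff-port-304197 image load docker.io/kubernetesui/dashboard:v2.7.0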
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-304197 logs kubernetes-dashboard-855c9754f9-dscz6 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-304197 logs kubernetes-dashboard-855c9754f9-dscz6 -n kubernetes-dashboard: exit status 1 (81.835889ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-dscz6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context default-k8s-diff-port-304197 logs kubernetes-dashboard-855c9754f9-dscz6 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
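The BadRequest from kubectl logs above is expected while the container sits in ImagePullBackOff: no container has ever started, so there are no logs to return. The pod's events carry the useful signal instead, e.g. (sketch):

	$ kubectl --context default-k8s-diff-port-304197 get events -n kubernetes-dashboard --sort-by=.lastTimestamp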
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-304197 -n default-k8s-diff-port-304197
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-304197 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-304197 logs -n 25: (1.323282055s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-024908 sudo iptables -t nat -L -n -v                                 │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo systemctl status kubelet --all --full --no-pager         │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo systemctl cat kubelet --no-pager                         │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo systemctl status docker --all --full --no-pager          │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │                     │
	│ ssh     │ -p bridge-024908 sudo systemctl cat docker --no-pager                          │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo cat /etc/docker/daemon.json                              │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo docker system info                                       │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │                     │
	│ ssh     │ -p bridge-024908 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │                     │
	│ ssh     │ -p bridge-024908 sudo systemctl cat cri-docker --no-pager                      │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │                     │
	│ ssh     │ -p bridge-024908 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo cri-dockerd --version                                    │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo systemctl status containerd --all --full --no-pager      │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │                     │
	│ ssh     │ -p bridge-024908 sudo systemctl cat containerd --no-pager                      │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo cat /lib/systemd/system/containerd.service               │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo cat /etc/containerd/config.toml                          │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo containerd config dump                                   │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo systemctl status crio --all --full --no-pager            │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo systemctl cat crio --no-pager                            │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo crio config                                              │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ delete  │ -p bridge-024908                                                               │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:36:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:36:38.514309   75550 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:36:38.514617   75550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:36:38.514630   75550 out.go:374] Setting ErrFile to fd 2...
	I0919 23:36:38.514638   75550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:36:38.514987   75550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	I0919 23:36:38.515692   75550 out.go:368] Setting JSON to false
	I0919 23:36:38.517068   75550 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8325,"bootTime":1758316673,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:36:38.517143   75550 start.go:140] virtualization: kvm guest
	I0919 23:36:38.519365   75550 out.go:179] * [bridge-024908] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:36:38.520862   75550 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:36:38.520868   75550 notify.go:220] Checking for updates...
	I0919 23:36:38.523475   75550 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:36:38.524802   75550 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 23:36:38.526039   75550 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 23:36:38.527638   75550 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:36:38.528915   75550 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:36:38.530830   75550 config.go:182] Loaded profile config "default-k8s-diff-port-304197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:36:38.530968   75550 config.go:182] Loaded profile config "enable-default-cni-024908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:36:38.531114   75550 config.go:182] Loaded profile config "flannel-024908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:36:38.531247   75550 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:36:38.583429   75550 out.go:179] * Using the kvm2 driver based on user configuration
	I0919 23:36:38.584859   75550 start.go:304] selected driver: kvm2
	I0919 23:36:38.584876   75550 start.go:918] validating driver "kvm2" against <nil>
	I0919 23:36:38.584888   75550 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:36:38.585778   75550 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:36:38.585880   75550 install.go:123] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21594-14764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 23:36:38.606707   75550 install.go:134] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is {Version:v1.37.0 Commit:6e37ee63f758843bb5fe33c3a528c564c4b83d53}
	I0919 23:36:38.606773   75550 install.go:123] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21594-14764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 23:36:38.625076   75550 install.go:134] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is {Version:v1.37.0 Commit:6e37ee63f758843bb5fe33c3a528c564c4b83d53}
	I0919 23:36:38.625126   75550 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 23:36:38.625392   75550 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:36:38.625424   75550 cni.go:84] Creating CNI manager for "bridge"
	I0919 23:36:38.625431   75550 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 23:36:38.625506   75550 start.go:348] cluster config:
	{Name:bridge-024908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-024908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:36:38.625627   75550 iso.go:125] acquiring lock: {Name:mk21ede999fca7478b081d3e470ef3cc88b140f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:36:38.628622   75550 out.go:179] * Starting "bridge-024908" primary control-plane node in "bridge-024908" cluster
	I0919 23:36:38.630015   75550 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 23:36:38.630084   75550 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 23:36:38.630097   75550 cache.go:58] Caching tarball of preloaded images
	I0919 23:36:38.630290   75550 preload.go:172] Found /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 23:36:38.630308   75550 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 23:36:38.630428   75550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/config.json ...
	I0919 23:36:38.630452   75550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/config.json: {Name:mke6f75eee0e949757ac34942cba06e9beb4106a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:36:38.630642   75550 start.go:360] acquireMachinesLock for bridge-024908: {Name:mke6cd936cf5da66e4fbcd4dcd8a2d3d3cae6c7b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 23:36:38.630679   75550 start.go:364] duration metric: took 20.281µs to acquireMachinesLock for "bridge-024908"
	I0919 23:36:38.630705   75550 start.go:93] Provisioning new machine with config: &{Name:bridge-024908 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-024908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 23:36:38.630809   75550 start.go:125] createHost starting for "" (driver="kvm2")
	I0919 23:36:35.915454   73258 node_ready.go:49] node "flannel-024908" is "Ready"
	I0919 23:36:35.915481   73258 node_ready.go:38] duration metric: took 6.005163719s for node "flannel-024908" to be "Ready" ...
	I0919 23:36:35.915494   73258 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:36:35.915547   73258 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:36:35.968546   73258 api_server.go:72] duration metric: took 7.171831903s to wait for apiserver process to appear ...
	I0919 23:36:35.968577   73258 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:36:35.968596   73258 api_server.go:253] Checking apiserver healthz at https://192.168.50.28:8443/healthz ...
	I0919 23:36:35.977779   73258 api_server.go:279] https://192.168.50.28:8443/healthz returned 200:
	ok
	I0919 23:36:35.979219   73258 api_server.go:141] control plane version: v1.34.0
	I0919 23:36:35.979246   73258 api_server.go:131] duration metric: took 10.661874ms to wait for apiserver health ...
	I0919 23:36:35.979256   73258 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:36:35.983890   73258 system_pods.go:59] 7 kube-system pods found
	I0919 23:36:35.983930   73258 system_pods.go:61] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:35.983938   73258 system_pods.go:61] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:35.983946   73258 system_pods.go:61] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:35.983952   73258 system_pods.go:61] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:35.983965   73258 system_pods.go:61] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:35.983969   73258 system_pods.go:61] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:35.983974   73258 system_pods.go:61] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:36:35.983986   73258 system_pods.go:74] duration metric: took 4.72304ms to wait for pod list to return data ...
	I0919 23:36:35.984001   73258 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:36:35.987409   73258 default_sa.go:45] found service account: "default"
	I0919 23:36:35.987437   73258 default_sa.go:55] duration metric: took 3.42862ms for default service account to be created ...
	I0919 23:36:35.987447   73258 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:36:35.991154   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:35.991189   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:35.991196   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:35.991206   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:35.991212   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:35.991217   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:35.991222   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:35.991233   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:36:35.991258   73258 retry.go:31] will retry after 296.545363ms: missing components: kube-dns
	I0919 23:36:36.401326   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:36.401364   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:36.401372   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:36.401380   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:36.401386   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:36.401396   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:36.401401   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:36.401408   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:36:36.401425   73258 retry.go:31] will retry after 318.074654ms: missing components: kube-dns
	I0919 23:36:36.730556   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:36.730609   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:36.730618   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:36.730640   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:36.730651   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:36.730655   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:36.730659   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:36.730664   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:36:36.730678   73258 retry.go:31] will retry after 300.035963ms: missing components: kube-dns
	I0919 23:36:37.037282   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:37.037323   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:37.037332   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:37.037344   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:37.037350   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:37.037355   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:37.037360   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:37.037367   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:36:37.037382   73258 retry.go:31] will retry after 557.978506ms: missing components: kube-dns
	I0919 23:36:37.600432   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:37.600472   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:37.600480   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:37.600488   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:37.600493   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:37.600499   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:37.600503   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:37.600508   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Running
	I0919 23:36:37.600525   73258 retry.go:31] will retry after 650.280663ms: missing components: kube-dns
	I0919 23:36:38.257373   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:38.257415   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:38.257424   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:38.257437   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:38.257451   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:38.257456   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:38.257462   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:38.257472   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Running
	I0919 23:36:38.257488   73258 retry.go:31] will retry after 900.725007ms: missing components: kube-dns
	I0919 23:36:39.166304   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:39.166367   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:39.166379   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:39.166388   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:39.166413   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:39.166421   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:39.166426   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:39.166430   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Running
	I0919 23:36:39.166447   73258 retry.go:31] will retry after 950.016778ms: missing components: kube-dns
	I0919 23:36:37.247291   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0919 23:36:37.247309   73436 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0919 23:36:37.247333   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHHostname
	I0919 23:36:37.247539   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | domain default-k8s-diff-port-304197 has defined MAC address 52:54:00:9c:dc:cc in network mk-default-k8s-diff-port-304197
	I0919 23:36:37.248720   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:dc:cc", ip: ""} in network mk-default-k8s-diff-port-304197: {Iface:virbr1 ExpiryTime:2025-09-20 00:36:14 +0000 UTC Type:0 Mac:52:54:00:9c:dc:cc Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:default-k8s-diff-port-304197 Clientid:01:52:54:00:9c:dc:cc}
	I0919 23:36:37.248764   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | domain default-k8s-diff-port-304197 has defined IP address 192.168.39.80 and MAC address 52:54:00:9c:dc:cc in network mk-default-k8s-diff-port-304197
	I0919 23:36:37.249161   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | domain default-k8s-diff-port-304197 has defined MAC address 52:54:00:9c:dc:cc in network mk-default-k8s-diff-port-304197
	I0919 23:36:37.249691   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHPort
	I0919 23:36:37.249958   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHKeyPath
	I0919 23:36:37.250146   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHUsername
	I0919 23:36:37.250302   73436 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/default-k8s-diff-port-304197/id_rsa Username:docker}
	I0919 23:36:37.250373   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:dc:cc", ip: ""} in network mk-default-k8s-diff-port-304197: {Iface:virbr1 ExpiryTime:2025-09-20 00:36:14 +0000 UTC Type:0 Mac:52:54:00:9c:dc:cc Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:default-k8s-diff-port-304197 Clientid:01:52:54:00:9c:dc:cc}
	I0919 23:36:37.250400   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | domain default-k8s-diff-port-304197 has defined IP address 192.168.39.80 and MAC address 52:54:00:9c:dc:cc in network mk-default-k8s-diff-port-304197
	I0919 23:36:37.251283   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHPort
	I0919 23:36:37.251973   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHKeyPath
	I0919 23:36:37.252237   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHUsername
	I0919 23:36:37.252742   73436 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/default-k8s-diff-port-304197/id_rsa Username:docker}
	I0919 23:36:37.253166   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | domain default-k8s-diff-port-304197 has defined MAC address 52:54:00:9c:dc:cc in network mk-default-k8s-diff-port-304197
	I0919 23:36:37.253800   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:dc:cc", ip: ""} in network mk-default-k8s-diff-port-304197: {Iface:virbr1 ExpiryTime:2025-09-20 00:36:14 +0000 UTC Type:0 Mac:52:54:00:9c:dc:cc Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:default-k8s-diff-port-304197 Clientid:01:52:54:00:9c:dc:cc}
	I0919 23:36:37.253825   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | domain default-k8s-diff-port-304197 has defined IP address 192.168.39.80 and MAC address 52:54:00:9c:dc:cc in network mk-default-k8s-diff-port-304197
	I0919 23:36:37.254089   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHPort
	I0919 23:36:37.254274   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHKeyPath
	I0919 23:36:37.254419   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHUsername
	I0919 23:36:37.254583   73436 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/default-k8s-diff-port-304197/id_rsa Username:docker}
	I0919 23:36:37.263061   73436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0919 23:36:37.263631   73436 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:36:37.264310   73436 main.go:141] libmachine: Using API Version  1
	I0919 23:36:37.264333   73436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:36:37.264709   73436 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:36:37.264888   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetState
	I0919 23:36:37.267085   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .DriverName
	I0919 23:36:37.267305   73436 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:36:37.267326   73436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:36:37.267345   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHHostname
	I0919 23:36:37.272794   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | domain default-k8s-diff-port-304197 has defined MAC address 52:54:00:9c:dc:cc in network mk-default-k8s-diff-port-304197
	I0919 23:36:37.273555   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:dc:cc", ip: ""} in network mk-default-k8s-diff-port-304197: {Iface:virbr1 ExpiryTime:2025-09-20 00:36:14 +0000 UTC Type:0 Mac:52:54:00:9c:dc:cc Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:default-k8s-diff-port-304197 Clientid:01:52:54:00:9c:dc:cc}
	I0919 23:36:37.273588   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | domain default-k8s-diff-port-304197 has defined IP address 192.168.39.80 and MAC address 52:54:00:9c:dc:cc in network mk-default-k8s-diff-port-304197
	I0919 23:36:37.273960   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHPort
	I0919 23:36:37.274233   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHKeyPath
	I0919 23:36:37.274382   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHUsername
	I0919 23:36:37.274543   73436 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/default-k8s-diff-port-304197/id_rsa Username:docker}
	I0919 23:36:37.528260   73436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:36:37.571831   73436 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-304197" to be "Ready" ...
	I0919 23:36:37.575164   73436 node_ready.go:49] node "default-k8s-diff-port-304197" is "Ready"
	I0919 23:36:37.575195   73436 node_ready.go:38] duration metric: took 3.335681ms for node "default-k8s-diff-port-304197" to be "Ready" ...
	I0919 23:36:37.575213   73436 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:36:37.575269   73436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:36:37.601936   73436 api_server.go:72] duration metric: took 415.188466ms to wait for apiserver process to appear ...
	I0919 23:36:37.601962   73436 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:36:37.601984   73436 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8444/healthz ...
	I0919 23:36:37.614609   73436 api_server.go:279] https://192.168.39.80:8444/healthz returned 200:
	ok
	I0919 23:36:37.616271   73436 api_server.go:141] control plane version: v1.34.0
	I0919 23:36:37.616301   73436 api_server.go:131] duration metric: took 14.330865ms to wait for apiserver health ...
	I0919 23:36:37.616313   73436 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:36:37.624253   73436 system_pods.go:59] 8 kube-system pods found
	I0919 23:36:37.624283   73436 system_pods.go:61] "coredns-66bc5c9577-qxgj9" [f6340754-da46-4e31-9f54-feec6a797beb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:37.624290   73436 system_pods.go:61] "etcd-default-k8s-diff-port-304197" [f55798f9-1fbd-45f9-9428-1814a72e1128] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:36:37.624301   73436 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-304197" [98d39265-9747-45f9-a05b-8791e24fba53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:36:37.624313   73436 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-304197" [d9f5b4bd-1110-4db6-9935-0a0645a71b0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:36:37.624317   73436 system_pods.go:61] "kube-proxy-hr2bk" [02b8b6af-3927-4e0c-a567-28aca5e8cd79] Running
	I0919 23:36:37.624322   73436 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-304197" [15d79a55-0413-42e7-ad1e-394df4d34730] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:36:37.624326   73436 system_pods.go:61] "metrics-server-746fcd58dc-7rhgt" [64e629d6-5d4b-49e5-ac73-5a67b6f877b4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:36:37.624330   73436 system_pods.go:61] "storage-provisioner" [3321717a-b901-415e-b199-977471c0ff1f] Running
	I0919 23:36:37.624335   73436 system_pods.go:74] duration metric: took 8.015678ms to wait for pod list to return data ...
	I0919 23:36:37.624342   73436 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:36:37.632577   73436 default_sa.go:45] found service account: "default"
	I0919 23:36:37.632605   73436 default_sa.go:55] duration metric: took 8.255844ms for default service account to be created ...
	I0919 23:36:37.632619   73436 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:36:37.645788   73436 system_pods.go:86] 8 kube-system pods found
	I0919 23:36:37.645893   73436 system_pods.go:89] "coredns-66bc5c9577-qxgj9" [f6340754-da46-4e31-9f54-feec6a797beb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:37.645924   73436 system_pods.go:89] "etcd-default-k8s-diff-port-304197" [f55798f9-1fbd-45f9-9428-1814a72e1128] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:36:37.645935   73436 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-304197" [98d39265-9747-45f9-a05b-8791e24fba53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:36:37.645945   73436 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-304197" [d9f5b4bd-1110-4db6-9935-0a0645a71b0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:36:37.645951   73436 system_pods.go:89] "kube-proxy-hr2bk" [02b8b6af-3927-4e0c-a567-28aca5e8cd79] Running
	I0919 23:36:37.645960   73436 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-304197" [15d79a55-0413-42e7-ad1e-394df4d34730] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:36:37.645967   73436 system_pods.go:89] "metrics-server-746fcd58dc-7rhgt" [64e629d6-5d4b-49e5-ac73-5a67b6f877b4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:36:37.645972   73436 system_pods.go:89] "storage-provisioner" [3321717a-b901-415e-b199-977471c0ff1f] Running
	I0919 23:36:37.646021   73436 system_pods.go:126] duration metric: took 13.383133ms to wait for k8s-apps to be running ...
	I0919 23:36:37.646044   73436 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:36:37.646114   73436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:36:37.677738   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0919 23:36:37.677769   73436 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0919 23:36:37.700765   73436 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 23:36:37.700795   73436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 23:36:37.713949   73436 system_svc.go:56] duration metric: took 67.894523ms WaitForService to wait for kubelet
	I0919 23:36:37.713984   73436 kubeadm.go:578] duration metric: took 527.238284ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:36:37.714008   73436 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:36:37.722451   73436 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 23:36:37.722478   73436 node_conditions.go:123] node cpu capacity is 2
	I0919 23:36:37.722494   73436 node_conditions.go:105] duration metric: took 8.480884ms to run NodePressure ...
	I0919 23:36:37.722507   73436 start.go:241] waiting for startup goroutines ...
	I0919 23:36:37.747187   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0919 23:36:37.747212   73436 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0919 23:36:37.750876   73436 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 23:36:37.750902   73436 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 23:36:37.787000   73436 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:36:37.787029   73436 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 23:36:37.788115   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0919 23:36:37.788138   73436 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0919 23:36:37.855895   73436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:36:37.859323   73436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:36:37.878047   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0919 23:36:37.878079   73436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0919 23:36:37.888244   73436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:36:37.978290   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0919 23:36:37.978318   73436 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0919 23:36:38.035279   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0919 23:36:38.035306   73436 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0919 23:36:38.127433   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0919 23:36:38.127462   73436 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0919 23:36:38.241825   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0919 23:36:38.241861   73436 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0919 23:36:38.323194   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:36:38.323231   73436 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0919 23:36:38.438195   73436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
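Each addon manifest is first scp'd into /etc/kubernetes/addons and then the whole set is applied in a single kubectl invocation with one -f per file, as the long apply commands above show. A sketch of how such a command line can be assembled in Go (the helper name and structure are illustrative, not minikube's actual code; the paths are the ones from the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifests assembles one kubectl invocation with a -f flag per
// manifest and KUBECONFIG pinned to the in-VM kubeconfig — the shape of the
// apply commands in the log. Illustrative only.
func applyAddonManifests(kubectl, kubeconfig string, manifests []string) *exec.Cmd {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	return cmd
}

func main() {
	cmd := applyAddonManifests(
		"/var/lib/minikube/binaries/v1.34.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		},
	)
	fmt.Println(cmd.String())
}
```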
	I0919 23:36:40.236932   73436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.377572555s)
	I0919 23:36:40.236980   73436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.381049316s)
	I0919 23:36:40.237001   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.236988   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.237012   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.237015   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.237106   73436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.348776664s)
	I0919 23:36:40.237128   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.237140   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.237679   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | Closing plugin on server side
	I0919 23:36:40.237686   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | Closing plugin on server side
	I0919 23:36:40.237686   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.237707   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.237711   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.237717   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.237720   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.237749   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.237757   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.237738   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.239407   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | Closing plugin on server side
	I0919 23:36:40.239438   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | Closing plugin on server side
	I0919 23:36:40.239458   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.239467   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.239474   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.239476   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.239485   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.239488   73436 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-304197"
	I0919 23:36:40.239493   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.239568   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | Closing plugin on server side
	I0919 23:36:40.239594   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.239601   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.239857   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.239908   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.286800   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.286822   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.287186   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | Closing plugin on server side
	I0919 23:36:40.287212   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.287226   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.563396   73436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.125143236s)
	I0919 23:36:40.563464   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.563481   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.563840   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | Closing plugin on server side
	I0919 23:36:40.563881   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.563894   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.563902   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.563910   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.564146   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.564158   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.565917   73436 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-304197 addons enable metrics-server
	
	I0919 23:36:40.567165   73436 out.go:179] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0919 23:36:40.568256   73436 addons.go:514] duration metric: took 3.381487183s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0919 23:36:40.568292   73436 start.go:246] waiting for cluster config update ...
	I0919 23:36:40.568303   73436 start.go:255] writing updated cluster config ...
	I0919 23:36:40.568541   73436 ssh_runner.go:195] Run: rm -f paused
	I0919 23:36:40.579261   73436 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:36:40.593680   73436 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qxgj9" in "kube-system" namespace to be "Ready" or be gone ...
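The pod_ready.go wait that starts here boils down to checking the pod's Ready condition. A sketch of that predicate against the k8s.io/api types (the function name is hypothetical; minikube's helper also tolerates pods that disappear, which is why the log says "Ready" or be gone):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether a pod has condition Ready=True — the test
// behind the pod_ready.go "Ready" messages in the log.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}
	fmt.Println(isPodReady(pod)) // true
}
```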
	I0919 23:36:38.632546   75550 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 23:36:38.632722   75550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:36:38.632798   75550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:36:38.648127   75550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39877
	I0919 23:36:38.648866   75550 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:36:38.649539   75550 main.go:141] libmachine: Using API Version  1
	I0919 23:36:38.649566   75550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:36:38.649985   75550 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:36:38.650173   75550 main.go:141] libmachine: (bridge-024908) Calling .GetMachineName
	I0919 23:36:38.650321   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:36:38.650488   75550 start.go:159] libmachine.API.Create for "bridge-024908" (driver="kvm2")
	I0919 23:36:38.650520   75550 client.go:168] LocalClient.Create starting
	I0919 23:36:38.650557   75550 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem
	I0919 23:36:38.650612   75550 main.go:141] libmachine: Decoding PEM data...
	I0919 23:36:38.650632   75550 main.go:141] libmachine: Parsing certificate...
	I0919 23:36:38.650742   75550 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem
	I0919 23:36:38.650773   75550 main.go:141] libmachine: Decoding PEM data...
	I0919 23:36:38.650791   75550 main.go:141] libmachine: Parsing certificate...
	I0919 23:36:38.650813   75550 main.go:141] libmachine: Running pre-create checks...
	I0919 23:36:38.650835   75550 main.go:141] libmachine: (bridge-024908) Calling .PreCreateCheck
	I0919 23:36:38.651375   75550 main.go:141] libmachine: (bridge-024908) Calling .GetConfigRaw
	I0919 23:36:38.651916   75550 main.go:141] libmachine: Creating machine...
	I0919 23:36:38.651933   75550 main.go:141] libmachine: (bridge-024908) Calling .Create
	I0919 23:36:38.652114   75550 main.go:141] libmachine: (bridge-024908) creating domain...
	I0919 23:36:38.652135   75550 main.go:141] libmachine: (bridge-024908) creating network...
	I0919 23:36:38.653894   75550 main.go:141] libmachine: (bridge-024908) DBG | found existing default network
	I0919 23:36:38.654138   75550 main.go:141] libmachine: (bridge-024908) DBG | <network connections='3'>
	I0919 23:36:38.654163   75550 main.go:141] libmachine: (bridge-024908) DBG |   <name>default</name>
	I0919 23:36:38.654269   75550 main.go:141] libmachine: (bridge-024908) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0919 23:36:38.654292   75550 main.go:141] libmachine: (bridge-024908) DBG |   <forward mode='nat'>
	I0919 23:36:38.654302   75550 main.go:141] libmachine: (bridge-024908) DBG |     <nat>
	I0919 23:36:38.654311   75550 main.go:141] libmachine: (bridge-024908) DBG |       <port start='1024' end='65535'/>
	I0919 23:36:38.654322   75550 main.go:141] libmachine: (bridge-024908) DBG |     </nat>
	I0919 23:36:38.654331   75550 main.go:141] libmachine: (bridge-024908) DBG |   </forward>
	I0919 23:36:38.654341   75550 main.go:141] libmachine: (bridge-024908) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0919 23:36:38.654350   75550 main.go:141] libmachine: (bridge-024908) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0919 23:36:38.654360   75550 main.go:141] libmachine: (bridge-024908) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0919 23:36:38.654369   75550 main.go:141] libmachine: (bridge-024908) DBG |     <dhcp>
	I0919 23:36:38.654379   75550 main.go:141] libmachine: (bridge-024908) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0919 23:36:38.654390   75550 main.go:141] libmachine: (bridge-024908) DBG |     </dhcp>
	I0919 23:36:38.654400   75550 main.go:141] libmachine: (bridge-024908) DBG |   </ip>
	I0919 23:36:38.654407   75550 main.go:141] libmachine: (bridge-024908) DBG | </network>
	I0919 23:36:38.654419   75550 main.go:141] libmachine: (bridge-024908) DBG | 
	I0919 23:36:38.655330   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:38.655113   75578 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:83:fa:c4} reservation:<nil>}
	I0919 23:36:38.656238   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:38.656153   75578 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:94:5d:37} reservation:<nil>}
	I0919 23:36:38.657335   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:38.657233   75578 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000117a00}
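The network.go lines above walk candidate private /24s, skip any that collide with an existing libvirt network (192.168.39.0/24 and 192.168.50.0/24 here), and settle on 192.168.61.0/24. A simplified sketch of that selection — taken subnets are passed in explicitly instead of being discovered from the host's interfaces:

```go
package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate /24 that does not overlap a
// subnet already in use, mirroring the skip/use decisions in the log.
func firstFreeSubnet(candidates []string, taken []*net.IPNet) (*net.IPNet, error) {
	for _, c := range candidates {
		_, subnet, err := net.ParseCIDR(c)
		if err != nil {
			return nil, err
		}
		free := true
		for _, t := range taken {
			if t.Contains(subnet.IP) || subnet.Contains(t.IP) {
				free = false
				break
			}
		}
		if free {
			return subnet, nil
		}
	}
	return nil, fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	_, t1, _ := net.ParseCIDR("192.168.39.0/24")
	_, t2, _ := net.ParseCIDR("192.168.50.0/24")
	free, err := firstFreeSubnet(
		[]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"},
		[]*net.IPNet{t1, t2},
	)
	fmt.Println(free, err) // 192.168.61.0/24 <nil>
}
```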
	I0919 23:36:38.657375   75550 main.go:141] libmachine: (bridge-024908) DBG | defining private network:
	I0919 23:36:38.657385   75550 main.go:141] libmachine: (bridge-024908) DBG | 
	I0919 23:36:38.657395   75550 main.go:141] libmachine: (bridge-024908) DBG | <network>
	I0919 23:36:38.657403   75550 main.go:141] libmachine: (bridge-024908) DBG |   <name>mk-bridge-024908</name>
	I0919 23:36:38.657415   75550 main.go:141] libmachine: (bridge-024908) DBG |   <dns enable='no'/>
	I0919 23:36:38.657425   75550 main.go:141] libmachine: (bridge-024908) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0919 23:36:38.657450   75550 main.go:141] libmachine: (bridge-024908) DBG |     <dhcp>
	I0919 23:36:38.657469   75550 main.go:141] libmachine: (bridge-024908) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0919 23:36:38.657482   75550 main.go:141] libmachine: (bridge-024908) DBG |     </dhcp>
	I0919 23:36:38.657492   75550 main.go:141] libmachine: (bridge-024908) DBG |   </ip>
	I0919 23:36:38.657499   75550 main.go:141] libmachine: (bridge-024908) DBG | </network>
	I0919 23:36:38.657504   75550 main.go:141] libmachine: (bridge-024908) DBG | 
	I0919 23:36:38.664316   75550 main.go:141] libmachine: (bridge-024908) DBG | creating private network mk-bridge-024908 192.168.61.0/24...
	I0919 23:36:38.762441   75550 main.go:141] libmachine: (bridge-024908) DBG | private network mk-bridge-024908 192.168.61.0/24 created
	I0919 23:36:38.762844   75550 main.go:141] libmachine: (bridge-024908) DBG | <network>
	I0919 23:36:38.762865   75550 main.go:141] libmachine: (bridge-024908) DBG |   <name>mk-bridge-024908</name>
	I0919 23:36:38.762876   75550 main.go:141] libmachine: (bridge-024908) setting up store path in /home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908 ...
	I0919 23:36:38.762919   75550 main.go:141] libmachine: (bridge-024908) building disk image from file:///home/jenkins/minikube-integration/21594-14764/.minikube/cache/iso/amd64/minikube-v1.37.0-amd64.iso
	I0919 23:36:38.762962   75550 main.go:141] libmachine: (bridge-024908) DBG |   <uuid>95a13fb5-b512-4326-8477-f3c0bf269579</uuid>
	I0919 23:36:38.762979   75550 main.go:141] libmachine: (bridge-024908) DBG |   <bridge name='virbr3' stp='on' delay='0'/>
	I0919 23:36:38.762993   75550 main.go:141] libmachine: (bridge-024908) DBG |   <mac address='52:54:00:15:e2:8d'/>
	I0919 23:36:38.763017   75550 main.go:141] libmachine: (bridge-024908) Downloading /home/jenkins/minikube-integration/21594-14764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21594-14764/.minikube/cache/iso/amd64/minikube-v1.37.0-amd64.iso...
	I0919 23:36:38.763037   75550 main.go:141] libmachine: (bridge-024908) DBG |   <dns enable='no'/>
	I0919 23:36:38.763070   75550 main.go:141] libmachine: (bridge-024908) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0919 23:36:38.763082   75550 main.go:141] libmachine: (bridge-024908) DBG |     <dhcp>
	I0919 23:36:38.763092   75550 main.go:141] libmachine: (bridge-024908) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0919 23:36:38.763106   75550 main.go:141] libmachine: (bridge-024908) DBG |     </dhcp>
	I0919 23:36:38.763118   75550 main.go:141] libmachine: (bridge-024908) DBG |   </ip>
	I0919 23:36:38.763127   75550 main.go:141] libmachine: (bridge-024908) DBG | </network>
	I0919 23:36:38.763138   75550 main.go:141] libmachine: (bridge-024908) DBG | 
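Creating the private network amounts to defining the XML printed above against libvirt and starting it, which brings up the virbr3 bridge and a dnsmasq instance for the DHCP range. A sketch using the libvirt.org/go/libvirt bindings — an assumption, since the kvm2 driver actually wraps libvirt through libmachine:

```go
package main

import (
	"fmt"
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// The <network> document from the log above.
const networkXML = `<network>
  <name>mk-bridge-024908</name>
  <dns enable='no'/>
  <ip address='192.168.61.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.61.2' end='192.168.61.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	net, err := conn.NetworkDefineXML(networkXML) // persist the definition
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()

	if err := net.Create(); err != nil { // start it: creates the bridge + DHCP
		log.Fatal(err)
	}
	fmt.Println("private network mk-bridge-024908 created")
}
```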
	I0919 23:36:38.763150   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:38.762795   75578 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 23:36:39.053562   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:39.053398   75578 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa...
	I0919 23:36:39.390093   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:39.389883   75578 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/bridge-024908.rawdisk...
	I0919 23:36:39.390129   75550 main.go:141] libmachine: (bridge-024908) DBG | Writing magic tar header
	I0919 23:36:39.390176   75550 main.go:141] libmachine: (bridge-024908) DBG | Writing SSH key tar header
	I0919 23:36:39.390207   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:39.390069   75578 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908 ...
	I0919 23:36:39.390228   75550 main.go:141] libmachine: (bridge-024908) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908
	I0919 23:36:39.390244   75550 main.go:141] libmachine: (bridge-024908) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21594-14764/.minikube/machines
	I0919 23:36:39.390259   75550 main.go:141] libmachine: (bridge-024908) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 23:36:39.390311   75550 main.go:141] libmachine: (bridge-024908) setting executable bit set on /home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908 (perms=drwx------)
	I0919 23:36:39.390327   75550 main.go:141] libmachine: (bridge-024908) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21594-14764
	I0919 23:36:39.390337   75550 main.go:141] libmachine: (bridge-024908) setting executable bit set on /home/jenkins/minikube-integration/21594-14764/.minikube/machines (perms=drwxr-xr-x)
	I0919 23:36:39.390347   75550 main.go:141] libmachine: (bridge-024908) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0919 23:36:39.390357   75550 main.go:141] libmachine: (bridge-024908) DBG | checking permissions on dir: /home/jenkins
	I0919 23:36:39.390370   75550 main.go:141] libmachine: (bridge-024908) DBG | checking permissions on dir: /home
	I0919 23:36:39.390381   75550 main.go:141] libmachine: (bridge-024908) DBG | skipping /home - not owner
	I0919 23:36:39.390399   75550 main.go:141] libmachine: (bridge-024908) setting executable bit set on /home/jenkins/minikube-integration/21594-14764/.minikube (perms=drwxr-xr-x)
	I0919 23:36:39.390414   75550 main.go:141] libmachine: (bridge-024908) setting executable bit set on /home/jenkins/minikube-integration/21594-14764 (perms=drwxrwxr-x)
	I0919 23:36:39.390429   75550 main.go:141] libmachine: (bridge-024908) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 23:36:39.390437   75550 main.go:141] libmachine: (bridge-024908) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0919 23:36:39.390480   75550 main.go:141] libmachine: (bridge-024908) defining domain...
	I0919 23:36:39.392184   75550 main.go:141] libmachine: (bridge-024908) defining domain using XML: 
	I0919 23:36:39.392209   75550 main.go:141] libmachine: (bridge-024908) <domain type='kvm'>
	I0919 23:36:39.392220   75550 main.go:141] libmachine: (bridge-024908)   <name>bridge-024908</name>
	I0919 23:36:39.392227   75550 main.go:141] libmachine: (bridge-024908)   <memory unit='MiB'>3072</memory>
	I0919 23:36:39.392235   75550 main.go:141] libmachine: (bridge-024908)   <vcpu>2</vcpu>
	I0919 23:36:39.392242   75550 main.go:141] libmachine: (bridge-024908)   <features>
	I0919 23:36:39.392251   75550 main.go:141] libmachine: (bridge-024908)     <acpi/>
	I0919 23:36:39.392264   75550 main.go:141] libmachine: (bridge-024908)     <apic/>
	I0919 23:36:39.392287   75550 main.go:141] libmachine: (bridge-024908)     <pae/>
	I0919 23:36:39.392295   75550 main.go:141] libmachine: (bridge-024908)   </features>
	I0919 23:36:39.392311   75550 main.go:141] libmachine: (bridge-024908)   <cpu mode='host-passthrough'>
	I0919 23:36:39.392317   75550 main.go:141] libmachine: (bridge-024908)   </cpu>
	I0919 23:36:39.392325   75550 main.go:141] libmachine: (bridge-024908)   <os>
	I0919 23:36:39.392331   75550 main.go:141] libmachine: (bridge-024908)     <type>hvm</type>
	I0919 23:36:39.392338   75550 main.go:141] libmachine: (bridge-024908)     <boot dev='cdrom'/>
	I0919 23:36:39.392344   75550 main.go:141] libmachine: (bridge-024908)     <boot dev='hd'/>
	I0919 23:36:39.392352   75550 main.go:141] libmachine: (bridge-024908)     <bootmenu enable='no'/>
	I0919 23:36:39.392358   75550 main.go:141] libmachine: (bridge-024908)   </os>
	I0919 23:36:39.392366   75550 main.go:141] libmachine: (bridge-024908)   <devices>
	I0919 23:36:39.392373   75550 main.go:141] libmachine: (bridge-024908)     <disk type='file' device='cdrom'>
	I0919 23:36:39.392386   75550 main.go:141] libmachine: (bridge-024908)       <source file='/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/boot2docker.iso'/>
	I0919 23:36:39.392393   75550 main.go:141] libmachine: (bridge-024908)       <target dev='hdc' bus='scsi'/>
	I0919 23:36:39.392401   75550 main.go:141] libmachine: (bridge-024908)       <readonly/>
	I0919 23:36:39.392421   75550 main.go:141] libmachine: (bridge-024908)     </disk>
	I0919 23:36:39.392431   75550 main.go:141] libmachine: (bridge-024908)     <disk type='file' device='disk'>
	I0919 23:36:39.392440   75550 main.go:141] libmachine: (bridge-024908)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 23:36:39.392453   75550 main.go:141] libmachine: (bridge-024908)       <source file='/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/bridge-024908.rawdisk'/>
	I0919 23:36:39.392460   75550 main.go:141] libmachine: (bridge-024908)       <target dev='hda' bus='virtio'/>
	I0919 23:36:39.392468   75550 main.go:141] libmachine: (bridge-024908)     </disk>
	I0919 23:36:39.392474   75550 main.go:141] libmachine: (bridge-024908)     <interface type='network'>
	I0919 23:36:39.392484   75550 main.go:141] libmachine: (bridge-024908)       <source network='mk-bridge-024908'/>
	I0919 23:36:39.392490   75550 main.go:141] libmachine: (bridge-024908)       <model type='virtio'/>
	I0919 23:36:39.392498   75550 main.go:141] libmachine: (bridge-024908)     </interface>
	I0919 23:36:39.392504   75550 main.go:141] libmachine: (bridge-024908)     <interface type='network'>
	I0919 23:36:39.392513   75550 main.go:141] libmachine: (bridge-024908)       <source network='default'/>
	I0919 23:36:39.392519   75550 main.go:141] libmachine: (bridge-024908)       <model type='virtio'/>
	I0919 23:36:39.392527   75550 main.go:141] libmachine: (bridge-024908)     </interface>
	I0919 23:36:39.392533   75550 main.go:141] libmachine: (bridge-024908)     <serial type='pty'>
	I0919 23:36:39.392541   75550 main.go:141] libmachine: (bridge-024908)       <target port='0'/>
	I0919 23:36:39.392547   75550 main.go:141] libmachine: (bridge-024908)     </serial>
	I0919 23:36:39.392555   75550 main.go:141] libmachine: (bridge-024908)     <console type='pty'>
	I0919 23:36:39.392572   75550 main.go:141] libmachine: (bridge-024908)       <target type='serial' port='0'/>
	I0919 23:36:39.392580   75550 main.go:141] libmachine: (bridge-024908)     </console>
	I0919 23:36:39.392586   75550 main.go:141] libmachine: (bridge-024908)     <rng model='virtio'>
	I0919 23:36:39.392596   75550 main.go:141] libmachine: (bridge-024908)       <backend model='random'>/dev/random</backend>
	I0919 23:36:39.392603   75550 main.go:141] libmachine: (bridge-024908)     </rng>
	I0919 23:36:39.392611   75550 main.go:141] libmachine: (bridge-024908)   </devices>
	I0919 23:36:39.392623   75550 main.go:141] libmachine: (bridge-024908) </domain>
	I0919 23:36:39.392632   75550 main.go:141] libmachine: (bridge-024908) 
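The domain goes through the same define-then-start lifecycle as the network: the <domain type='kvm'> document above is persisted, and the guest is then launched (the "starting domain..." step a few lines below). A sketch, again assuming the libvirt Go bindings and a hypothetical file holding the XML:

```go
package main

import (
	"fmt"
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Hypothetical file containing the <domain type='kvm'> XML from the log.
	xml, err := os.ReadFile("bridge-024908.xml")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persist the definition
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // start the defined domain
		log.Fatal(err)
	}
	fmt.Println("domain is now running")
}
```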
	I0919 23:36:39.398997   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:c9:06:0c in network default
	I0919 23:36:39.399812   75550 main.go:141] libmachine: (bridge-024908) starting domain...
	I0919 23:36:39.399829   75550 main.go:141] libmachine: (bridge-024908) ensuring networks are active...
	I0919 23:36:39.399847   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:39.400874   75550 main.go:141] libmachine: (bridge-024908) Ensuring network default is active
	I0919 23:36:39.401267   75550 main.go:141] libmachine: (bridge-024908) Ensuring network mk-bridge-024908 is active
	I0919 23:36:39.402187   75550 main.go:141] libmachine: (bridge-024908) getting domain XML...
	I0919 23:36:39.403617   75550 main.go:141] libmachine: (bridge-024908) DBG | starting domain XML:
	I0919 23:36:39.403635   75550 main.go:141] libmachine: (bridge-024908) DBG | <domain type='kvm'>
	I0919 23:36:39.403644   75550 main.go:141] libmachine: (bridge-024908) DBG |   <name>bridge-024908</name>
	I0919 23:36:39.403654   75550 main.go:141] libmachine: (bridge-024908) DBG |   <uuid>edde9a5b-670d-4ac4-972d-a0f3dbabce20</uuid>
	I0919 23:36:39.403665   75550 main.go:141] libmachine: (bridge-024908) DBG |   <memory unit='KiB'>3145728</memory>
	I0919 23:36:39.403675   75550 main.go:141] libmachine: (bridge-024908) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0919 23:36:39.403692   75550 main.go:141] libmachine: (bridge-024908) DBG |   <vcpu placement='static'>2</vcpu>
	I0919 23:36:39.403698   75550 main.go:141] libmachine: (bridge-024908) DBG |   <os>
	I0919 23:36:39.403708   75550 main.go:141] libmachine: (bridge-024908) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0919 23:36:39.403714   75550 main.go:141] libmachine: (bridge-024908) DBG |     <boot dev='cdrom'/>
	I0919 23:36:39.403823   75550 main.go:141] libmachine: (bridge-024908) DBG |     <boot dev='hd'/>
	I0919 23:36:39.403871   75550 main.go:141] libmachine: (bridge-024908) DBG |     <bootmenu enable='no'/>
	I0919 23:36:39.403885   75550 main.go:141] libmachine: (bridge-024908) DBG |   </os>
	I0919 23:36:39.403892   75550 main.go:141] libmachine: (bridge-024908) DBG |   <features>
	I0919 23:36:39.403901   75550 main.go:141] libmachine: (bridge-024908) DBG |     <acpi/>
	I0919 23:36:39.403907   75550 main.go:141] libmachine: (bridge-024908) DBG |     <apic/>
	I0919 23:36:39.403915   75550 main.go:141] libmachine: (bridge-024908) DBG |     <pae/>
	I0919 23:36:39.403936   75550 main.go:141] libmachine: (bridge-024908) DBG |   </features>
	I0919 23:36:39.403962   75550 main.go:141] libmachine: (bridge-024908) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0919 23:36:39.403976   75550 main.go:141] libmachine: (bridge-024908) DBG |   <clock offset='utc'/>
	I0919 23:36:39.403990   75550 main.go:141] libmachine: (bridge-024908) DBG |   <on_poweroff>destroy</on_poweroff>
	I0919 23:36:39.403998   75550 main.go:141] libmachine: (bridge-024908) DBG |   <on_reboot>restart</on_reboot>
	I0919 23:36:39.404012   75550 main.go:141] libmachine: (bridge-024908) DBG |   <on_crash>destroy</on_crash>
	I0919 23:36:39.404020   75550 main.go:141] libmachine: (bridge-024908) DBG |   <devices>
	I0919 23:36:39.404030   75550 main.go:141] libmachine: (bridge-024908) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0919 23:36:39.404061   75550 main.go:141] libmachine: (bridge-024908) DBG |     <disk type='file' device='cdrom'>
	I0919 23:36:39.404107   75550 main.go:141] libmachine: (bridge-024908) DBG |       <driver name='qemu' type='raw'/>
	I0919 23:36:39.404141   75550 main.go:141] libmachine: (bridge-024908) DBG |       <source file='/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/boot2docker.iso'/>
	I0919 23:36:39.404155   75550 main.go:141] libmachine: (bridge-024908) DBG |       <target dev='hdc' bus='scsi'/>
	I0919 23:36:39.404162   75550 main.go:141] libmachine: (bridge-024908) DBG |       <readonly/>
	I0919 23:36:39.404188   75550 main.go:141] libmachine: (bridge-024908) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0919 23:36:39.404198   75550 main.go:141] libmachine: (bridge-024908) DBG |     </disk>
	I0919 23:36:39.404216   75550 main.go:141] libmachine: (bridge-024908) DBG |     <disk type='file' device='disk'>
	I0919 23:36:39.404224   75550 main.go:141] libmachine: (bridge-024908) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0919 23:36:39.404238   75550 main.go:141] libmachine: (bridge-024908) DBG |       <source file='/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/bridge-024908.rawdisk'/>
	I0919 23:36:39.404246   75550 main.go:141] libmachine: (bridge-024908) DBG |       <target dev='hda' bus='virtio'/>
	I0919 23:36:39.404257   75550 main.go:141] libmachine: (bridge-024908) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0919 23:36:39.404263   75550 main.go:141] libmachine: (bridge-024908) DBG |     </disk>
	I0919 23:36:39.404273   75550 main.go:141] libmachine: (bridge-024908) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0919 23:36:39.404283   75550 main.go:141] libmachine: (bridge-024908) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0919 23:36:39.404292   75550 main.go:141] libmachine: (bridge-024908) DBG |     </controller>
	I0919 23:36:39.404301   75550 main.go:141] libmachine: (bridge-024908) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0919 23:36:39.404310   75550 main.go:141] libmachine: (bridge-024908) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0919 23:36:39.404320   75550 main.go:141] libmachine: (bridge-024908) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0919 23:36:39.404328   75550 main.go:141] libmachine: (bridge-024908) DBG |     </controller>
	I0919 23:36:39.404336   75550 main.go:141] libmachine: (bridge-024908) DBG |     <interface type='network'>
	I0919 23:36:39.404344   75550 main.go:141] libmachine: (bridge-024908) DBG |       <mac address='52:54:00:6c:4a:f7'/>
	I0919 23:36:39.404352   75550 main.go:141] libmachine: (bridge-024908) DBG |       <source network='mk-bridge-024908'/>
	I0919 23:36:39.404369   75550 main.go:141] libmachine: (bridge-024908) DBG |       <model type='virtio'/>
	I0919 23:36:39.404379   75550 main.go:141] libmachine: (bridge-024908) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0919 23:36:39.404387   75550 main.go:141] libmachine: (bridge-024908) DBG |     </interface>
	I0919 23:36:39.404394   75550 main.go:141] libmachine: (bridge-024908) DBG |     <interface type='network'>
	I0919 23:36:39.404403   75550 main.go:141] libmachine: (bridge-024908) DBG |       <mac address='52:54:00:c9:06:0c'/>
	I0919 23:36:39.404410   75550 main.go:141] libmachine: (bridge-024908) DBG |       <source network='default'/>
	I0919 23:36:39.404418   75550 main.go:141] libmachine: (bridge-024908) DBG |       <model type='virtio'/>
	I0919 23:36:39.404428   75550 main.go:141] libmachine: (bridge-024908) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0919 23:36:39.404435   75550 main.go:141] libmachine: (bridge-024908) DBG |     </interface>
	I0919 23:36:39.404445   75550 main.go:141] libmachine: (bridge-024908) DBG |     <serial type='pty'>
	I0919 23:36:39.404454   75550 main.go:141] libmachine: (bridge-024908) DBG |       <target type='isa-serial' port='0'>
	I0919 23:36:39.404461   75550 main.go:141] libmachine: (bridge-024908) DBG |         <model name='isa-serial'/>
	I0919 23:36:39.404469   75550 main.go:141] libmachine: (bridge-024908) DBG |       </target>
	I0919 23:36:39.404475   75550 main.go:141] libmachine: (bridge-024908) DBG |     </serial>
	I0919 23:36:39.404483   75550 main.go:141] libmachine: (bridge-024908) DBG |     <console type='pty'>
	I0919 23:36:39.404490   75550 main.go:141] libmachine: (bridge-024908) DBG |       <target type='serial' port='0'/>
	I0919 23:36:39.404498   75550 main.go:141] libmachine: (bridge-024908) DBG |     </console>
	I0919 23:36:39.404506   75550 main.go:141] libmachine: (bridge-024908) DBG |     <input type='mouse' bus='ps2'/>
	I0919 23:36:39.404515   75550 main.go:141] libmachine: (bridge-024908) DBG |     <input type='keyboard' bus='ps2'/>
	I0919 23:36:39.404522   75550 main.go:141] libmachine: (bridge-024908) DBG |     <audio id='1' type='none'/>
	I0919 23:36:39.404530   75550 main.go:141] libmachine: (bridge-024908) DBG |     <memballoon model='virtio'>
	I0919 23:36:39.404539   75550 main.go:141] libmachine: (bridge-024908) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0919 23:36:39.404547   75550 main.go:141] libmachine: (bridge-024908) DBG |     </memballoon>
	I0919 23:36:39.404553   75550 main.go:141] libmachine: (bridge-024908) DBG |     <rng model='virtio'>
	I0919 23:36:39.404562   75550 main.go:141] libmachine: (bridge-024908) DBG |       <backend model='random'>/dev/random</backend>
	I0919 23:36:39.404571   75550 main.go:141] libmachine: (bridge-024908) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0919 23:36:39.404582   75550 main.go:141] libmachine: (bridge-024908) DBG |     </rng>
	I0919 23:36:39.404588   75550 main.go:141] libmachine: (bridge-024908) DBG |   </devices>
	I0919 23:36:39.404596   75550 main.go:141] libmachine: (bridge-024908) DBG | </domain>
	I0919 23:36:39.404602   75550 main.go:141] libmachine: (bridge-024908) DBG | 
	I0919 23:36:41.141041   75550 main.go:141] libmachine: (bridge-024908) waiting for domain to start...
	I0919 23:36:41.142627   75550 main.go:141] libmachine: (bridge-024908) domain is now running
	I0919 23:36:41.142650   75550 main.go:141] libmachine: (bridge-024908) waiting for IP...
	I0919 23:36:41.143576   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:41.144321   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:41.144368   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:41.147405   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:41.147508   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:41.147406   75578 retry.go:31] will retry after 302.394896ms: waiting for domain to come up
	I0919 23:36:41.452391   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:41.453227   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:41.453267   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:41.453782   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:41.453813   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:41.453668   75578 retry.go:31] will retry after 284.249946ms: waiting for domain to come up
	I0919 23:36:41.740563   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:41.741701   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:41.741977   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:41.742448   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:41.742684   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:41.742535   75578 retry.go:31] will retry after 320.73485ms: waiting for domain to come up
	I0919 23:36:42.065165   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:42.066132   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:42.066156   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:42.066643   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:42.066683   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:42.066620   75578 retry.go:31] will retry after 403.91255ms: waiting for domain to come up
	I0919 23:36:42.472445   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:42.473224   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:42.473274   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:42.473707   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:42.473759   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:42.473659   75578 retry.go:31] will retry after 562.979109ms: waiting for domain to come up
	I0919 23:36:43.038837   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:43.039974   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:43.040083   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:43.040529   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:43.040566   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:43.040524   75578 retry.go:31] will retry after 888.081744ms: waiting for domain to come up
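The "will retry after …" lines come from a jittered, growing backoff while the driver polls DHCP leases (source=lease) and the ARP tables (source=arp) for the new MAC to pick up an address. A self-contained sketch of that retry shape — the backoff policy of the real retry.go helper may differ:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or maxWait elapses, sleeping a
// jittered, geometrically growing interval between attempts — the pattern
// behind the irregular 302ms, 284ms, 320ms, ... waits in the log.
func retryWithBackoff(fn func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	backoff := 300 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)/2))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for domain to come up")
		}
		return nil
	}, 10*time.Second)
	fmt.Println("done:", err)
}
```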
	I0919 23:36:40.122373   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:40.122419   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:40.122428   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:40.122438   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:40.122444   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:40.122453   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:40.122458   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:40.122464   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Running
	I0919 23:36:40.122482   73258 retry.go:31] will retry after 1.467659498s: missing components: kube-dns
	I0919 23:36:41.596933   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:41.596978   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:41.596987   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:41.596996   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:41.597003   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:41.597008   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:41.597014   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:41.597020   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Running
	I0919 23:36:41.597039   73258 retry.go:31] will retry after 1.787145551s: missing components: kube-dns
	I0919 23:36:43.391350   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:43.391395   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:43.391406   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:43.391414   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:43.391420   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:43.391427   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:43.391432   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:43.391437   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Running
	I0919 23:36:43.391457   73258 retry.go:31] will retry after 2.32094539s: missing components: kube-dns
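The flannel cluster's wait keeps reporting "missing components: kube-dns" because the coredns pod is still Pending; the check looks at pod phase, not readiness, which is why the Running-but-unready coredns later satisfies it. A sketch of that bookkeeping over a plain name-to-phase map (real code would walk Pod objects and label selectors):

```go
package main

import "fmt"

// missingComponents returns which expected kube-system apps have no Running
// pod — the logic behind the "missing components: kube-dns" retries above.
func missingComponents(podPhases map[string]string, expected []string) []string {
	var missing []string
	for _, name := range expected {
		if podPhases[name] != "Running" {
			missing = append(missing, name)
		}
	}
	return missing
}

func main() {
	phases := map[string]string{
		"kube-dns":   "Pending", // coredns pod not yet scheduled/started
		"etcd":       "Running",
		"kube-proxy": "Running",
	}
	fmt.Println(missingComponents(phases, []string{"kube-dns", "etcd", "kube-proxy"}))
	// Output: [kube-dns]
}
```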
	W0919 23:36:42.610833   73436 pod_ready.go:104] pod "coredns-66bc5c9577-qxgj9" is not "Ready", error: <nil>
	I0919 23:36:43.605311   73436 pod_ready.go:94] pod "coredns-66bc5c9577-qxgj9" is "Ready"
	I0919 23:36:43.605355   73436 pod_ready.go:86] duration metric: took 3.0116459s for pod "coredns-66bc5c9577-qxgj9" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:43.610579   73436 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-304197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:45.119780   73436 pod_ready.go:94] pod "etcd-default-k8s-diff-port-304197" is "Ready"
	I0919 23:36:45.119810   73436 pod_ready.go:86] duration metric: took 1.509196741s for pod "etcd-default-k8s-diff-port-304197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:45.123556   73436 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-304197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:45.135451   73436 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-304197" is "Ready"
	I0919 23:36:45.135483   73436 pod_ready.go:86] duration metric: took 11.885835ms for pod "kube-apiserver-default-k8s-diff-port-304197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:45.141571   73436 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-304197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:45.148179   73436 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-304197" is "Ready"
	I0919 23:36:45.148214   73436 pod_ready.go:86] duration metric: took 6.609569ms for pod "kube-controller-manager-default-k8s-diff-port-304197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:45.199272   73436 pod_ready.go:83] waiting for pod "kube-proxy-hr2bk" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:45.602963   73436 pod_ready.go:94] pod "kube-proxy-hr2bk" is "Ready"
	I0919 23:36:45.602991   73436 pod_ready.go:86] duration metric: took 403.695732ms for pod "kube-proxy-hr2bk" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:45.799122   73436 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-304197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:46.199651   73436 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-304197" is "Ready"
	I0919 23:36:46.199686   73436 pod_ready.go:86] duration metric: took 400.535111ms for pod "kube-scheduler-default-k8s-diff-port-304197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:46.199701   73436 pod_ready.go:40] duration metric: took 5.620390024s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:36:46.249045   73436 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:36:46.250918   73436 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-304197" cluster and "default" namespace by default
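The line before "Done!" reports the minor-version skew between the local kubectl (1.34.1) and the cluster (1.34.0). A naive sketch of that comparison with plain string parsing — minikube itself compares versions with a semver library:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference in minor version between two
// "major.minor.patch" strings — the "(minor skew: 0)" note in the log.
func minorSkew(kubectlVer, clusterVer string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	a, err := minor(kubectlVer)
	if err != nil {
		return 0, err
	}
	b, err := minor(clusterVer)
	if err != nil {
		return 0, err
	}
	if a > b {
		return a - b, nil
	}
	return b - a, nil
}

func main() {
	skew, _ := minorSkew("1.34.1", "1.34.0")
	fmt.Printf("kubectl: 1.34.1, cluster: 1.34.0 (minor skew: %d)\n", skew)
}
```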
	I0919 23:36:43.929924   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:43.930760   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:43.930790   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:43.931162   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:43.931184   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:43.931148   75578 retry.go:31] will retry after 1.15149481s: waiting for domain to come up
	I0919 23:36:45.084216   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:45.085010   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:45.085040   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:45.085380   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:45.085401   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:45.085359   75578 retry.go:31] will retry after 1.310420989s: waiting for domain to come up
	I0919 23:36:46.399094   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:46.399967   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:46.399985   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:46.400409   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:46.400460   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:46.400397   75578 retry.go:31] will retry after 1.537684727s: waiting for domain to come up
	I0919 23:36:47.939978   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:47.940713   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:47.940746   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:47.941190   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:47.941246   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:47.941179   75578 retry.go:31] will retry after 2.173582548s: waiting for domain to come up
	I0919 23:36:45.718271   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:45.718306   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:45.718315   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:45.718323   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:45.718329   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:45.718334   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:45.718339   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:45.718350   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Running
	I0919 23:36:45.718369   73258 retry.go:31] will retry after 2.363488525s: missing components: kube-dns
	I0919 23:36:48.088981   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:48.089014   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:48.089020   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:48.089033   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:48.089036   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:48.089041   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:48.089044   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:48.089047   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Running
	I0919 23:36:48.089056   73258 system_pods.go:126] duration metric: took 12.101602867s to wait for k8s-apps to be running ...
	I0919 23:36:48.089063   73258 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:36:48.089111   73258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:36:48.111527   73258 system_svc.go:56] duration metric: took 22.453524ms WaitForService to wait for kubelet
	I0919 23:36:48.111558   73258 kubeadm.go:578] duration metric: took 19.314850114s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:36:48.111576   73258 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:36:48.114690   73258 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 23:36:48.114739   73258 node_conditions.go:123] node cpu capacity is 2
	I0919 23:36:48.114757   73258 node_conditions.go:105] duration metric: took 3.175386ms to run NodePressure ...
	I0919 23:36:48.114771   73258 start.go:241] waiting for startup goroutines ...
	I0919 23:36:48.114780   73258 start.go:246] waiting for cluster config update ...
	I0919 23:36:48.114795   73258 start.go:255] writing updated cluster config ...
	I0919 23:36:48.158441   73258 ssh_runner.go:195] Run: rm -f paused
	I0919 23:36:48.165144   73258 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:36:48.169842   73258 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6ff4s" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:48.677739   73258 pod_ready.go:94] pod "coredns-66bc5c9577-6ff4s" is "Ready"
	I0919 23:36:48.677769   73258 pod_ready.go:86] duration metric: took 507.901936ms for pod "coredns-66bc5c9577-6ff4s" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:48.680484   73258 pod_ready.go:83] waiting for pod "etcd-flannel-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:48.690907   73258 pod_ready.go:94] pod "etcd-flannel-024908" is "Ready"
	I0919 23:36:48.690940   73258 pod_ready.go:86] duration metric: took 10.417306ms for pod "etcd-flannel-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:48.782360   73258 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:48.791110   73258 pod_ready.go:94] pod "kube-apiserver-flannel-024908" is "Ready"
	I0919 23:36:48.791134   73258 pod_ready.go:86] duration metric: took 8.752412ms for pod "kube-apiserver-flannel-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:48.794616   73258 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:48.970689   73258 pod_ready.go:94] pod "kube-controller-manager-flannel-024908" is "Ready"
	I0919 23:36:48.970713   73258 pod_ready.go:86] duration metric: took 176.076221ms for pod "kube-controller-manager-flannel-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:49.172907   73258 pod_ready.go:83] waiting for pod "kube-proxy-5ch96" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:49.573296   73258 pod_ready.go:94] pod "kube-proxy-5ch96" is "Ready"
	I0919 23:36:49.573339   73258 pod_ready.go:86] duration metric: took 400.39909ms for pod "kube-proxy-5ch96" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:49.770664   73258 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:50.170961   73258 pod_ready.go:94] pod "kube-scheduler-flannel-024908" is "Ready"
	I0919 23:36:50.170994   73258 pod_ready.go:86] duration metric: took 400.305782ms for pod "kube-scheduler-flannel-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:50.171008   73258 pod_ready.go:40] duration metric: took 2.005821553s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:36:50.229754   73258 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:36:50.232046   73258 out.go:179] * Done! kubectl is now configured to use "flannel-024908" cluster and "default" namespace by default
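
The `pod_ready.go` lines above wait for every kube-system pod carrying one of the listed labels to be "Ready". Outside the test harness the same check can be approximated with `kubectl wait`; the sketch below shells out to kubectl rather than talking to the API server directly (which is what minikube actually does), with the context name and labels copied from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Approximate the "extra waiting ... to be Ready" step with kubectl wait.
// Sketch only: minikube's pod_ready implementation uses the API directly.
func main() {
	labels := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, l := range labels {
		cmd := exec.Command("kubectl", "--context", "flannel-024908",
			"-n", "kube-system", "wait", "--for=condition=Ready",
			"pod", "-l", l, "--timeout=4m")
		start := time.Now()
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s (%s): %s err=%v\n", l, time.Since(start), out, err)
	}
}
```
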
	I0919 23:36:50.117153   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:50.118036   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:50.118062   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:50.118448   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:50.118492   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:50.118415   75578 retry.go:31] will retry after 2.881257511s: waiting for domain to come up
	I0919 23:36:53.003017   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:53.003747   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:53.003776   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:53.004219   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:53.004295   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:53.004201   75578 retry.go:31] will retry after 2.53385353s: waiting for domain to come up
	I0919 23:36:55.540218   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:55.540965   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:55.540991   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:55.541332   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:55.541356   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:55.541296   75578 retry.go:31] will retry after 3.231060911s: waiting for domain to come up
	I0919 23:36:58.774245   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:58.775228   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has current primary IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:58.775256   75550 main.go:141] libmachine: (bridge-024908) found domain IP: 192.168.61.181
	I0919 23:36:58.775268   75550 main.go:141] libmachine: (bridge-024908) reserving static IP address...
	I0919 23:36:58.775750   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find host DHCP lease matching {name: "bridge-024908", mac: "52:54:00:6c:4a:f7", ip: "192.168.61.181"} in network mk-bridge-024908
	I0919 23:36:59.018702   75550 main.go:141] libmachine: (bridge-024908) reserved static IP address 192.168.61.181 for domain bridge-024908
	I0919 23:36:59.018768   75550 main.go:141] libmachine: (bridge-024908) waiting for SSH...
	I0919 23:36:59.018785   75550 main.go:141] libmachine: (bridge-024908) DBG | Getting to WaitForSSH function...
	I0919 23:36:59.023380   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.023934   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:36:59.023964   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.024206   75550 main.go:141] libmachine: (bridge-024908) DBG | Using SSH client type: external
	I0919 23:36:59.024232   75550 main.go:141] libmachine: (bridge-024908) DBG | Using SSH private key: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa (-rw-------)
	I0919 23:36:59.024271   75550 main.go:141] libmachine: (bridge-024908) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 23:36:59.024282   75550 main.go:141] libmachine: (bridge-024908) DBG | About to run SSH command:
	I0919 23:36:59.024295   75550 main.go:141] libmachine: (bridge-024908) DBG | exit 0
	I0919 23:36:59.165433   75550 main.go:141] libmachine: (bridge-024908) DBG | SSH cmd err, output: <nil>: 
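
WaitForSSH above probes the guest by running `exit 0` through an external ssh client until the command exits cleanly. A hedged sketch of that probe follows; the host address and ssh flags mirror the log, while the key path is a placeholder:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH runs `exit 0` over ssh until it returns cleanly, the way the
// log's WaitForSSH does. Illustrative sketch, not minikube's code.
func waitForSSH(host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "PasswordAuthentication=no",
			"-i", keyPath,
			"docker@"+host, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // guest sshd is up and the key is accepted
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not ready within %s", host, timeout)
}

func main() {
	err := waitForSSH("192.168.61.181", "/path/to/id_rsa", time.Minute)
	fmt.Println("waitForSSH:", err)
}
```
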
	I0919 23:36:59.165781   75550 main.go:141] libmachine: (bridge-024908) domain creation complete
	I0919 23:36:59.166275   75550 main.go:141] libmachine: (bridge-024908) Calling .GetConfigRaw
	I0919 23:36:59.167044   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:36:59.167296   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:36:59.167493   75550 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 23:36:59.167510   75550 main.go:141] libmachine: (bridge-024908) Calling .GetState
	I0919 23:36:59.170476   75550 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 23:36:59.170500   75550 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 23:36:59.170508   75550 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 23:36:59.170517   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:36:59.173871   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.174509   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:36:59.174616   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.174993   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:36:59.175250   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.175448   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.175656   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:36:59.175870   75550 main.go:141] libmachine: Using SSH client type: native
	I0919 23:36:59.176170   75550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0919 23:36:59.176180   75550 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 23:36:59.299849   75550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:36:59.299880   75550 main.go:141] libmachine: Detecting the provisioner...
	I0919 23:36:59.299891   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:36:59.303824   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.304279   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:36:59.304319   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.304599   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:36:59.304848   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.305034   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.305194   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:36:59.305431   75550 main.go:141] libmachine: Using SSH client type: native
	I0919 23:36:59.305642   75550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0919 23:36:59.305662   75550 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 23:36:59.430465   75550 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0919 23:36:59.430538   75550 main.go:141] libmachine: found compatible host: buildroot
	I0919 23:36:59.430548   75550 main.go:141] libmachine: Provisioning with buildroot...
	I0919 23:36:59.430558   75550 main.go:141] libmachine: (bridge-024908) Calling .GetMachineName
	I0919 23:36:59.430861   75550 buildroot.go:166] provisioning hostname "bridge-024908"
	I0919 23:36:59.430887   75550 main.go:141] libmachine: (bridge-024908) Calling .GetMachineName
	I0919 23:36:59.431096   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:36:59.434754   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.435320   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:36:59.435359   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.435582   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:36:59.435777   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.435970   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.436113   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:36:59.436259   75550 main.go:141] libmachine: Using SSH client type: native
	I0919 23:36:59.436541   75550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0919 23:36:59.436561   75550 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-024908 && echo "bridge-024908" | sudo tee /etc/hostname
	I0919 23:36:59.578445   75550 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-024908
	
	I0919 23:36:59.578478   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:36:59.582245   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.582752   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:36:59.582781   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.583050   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:36:59.583226   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.583431   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.583588   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:36:59.583813   75550 main.go:141] libmachine: Using SSH client type: native
	I0919 23:36:59.584091   75550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0919 23:36:59.584117   75550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-024908' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-024908/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-024908' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:36:59.725722   75550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:36:59.725771   75550 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14764/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14764/.minikube}
	I0919 23:36:59.725795   75550 buildroot.go:174] setting up certificates
	I0919 23:36:59.725806   75550 provision.go:84] configureAuth start
	I0919 23:36:59.725818   75550 main.go:141] libmachine: (bridge-024908) Calling .GetMachineName
	I0919 23:36:59.726137   75550 main.go:141] libmachine: (bridge-024908) Calling .GetIP
	I0919 23:36:59.729658   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.730174   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:36:59.730213   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.730440   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:36:59.734183   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.734769   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:36:59.734800   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.735045   75550 provision.go:143] copyHostCerts
	I0919 23:36:59.735114   75550 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14764/.minikube/key.pem, removing ...
	I0919 23:36:59.735128   75550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14764/.minikube/key.pem
	I0919 23:36:59.735221   75550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14764/.minikube/key.pem (1679 bytes)
	I0919 23:36:59.735351   75550 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14764/.minikube/ca.pem, removing ...
	I0919 23:36:59.735362   75550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14764/.minikube/ca.pem
	I0919 23:36:59.735405   75550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14764/.minikube/ca.pem (1082 bytes)
	I0919 23:36:59.735510   75550 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14764/.minikube/cert.pem, removing ...
	I0919 23:36:59.735521   75550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14764/.minikube/cert.pem
	I0919 23:36:59.735558   75550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14764/.minikube/cert.pem (1123 bytes)
	I0919 23:36:59.735633   75550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca-key.pem org=jenkins.bridge-024908 san=[127.0.0.1 192.168.61.181 bridge-024908 localhost minikube]
	I0919 23:36:59.911374   75550 provision.go:177] copyRemoteCerts
	I0919 23:36:59.911428   75550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:36:59.911451   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:36:59.915638   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.916159   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:36:59.916207   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.916473   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:36:59.916749   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.916970   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:36:59.917121   75550 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa Username:docker}
	I0919 23:37:00.016922   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 23:37:00.060261   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 23:37:00.103028   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 23:37:00.162378   75550 provision.go:87] duration metric: took 436.56023ms to configureAuth
	I0919 23:37:00.162410   75550 buildroot.go:189] setting minikube options for container-runtime
	I0919 23:37:00.162585   75550 config.go:182] Loaded profile config "bridge-024908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:37:00.162667   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:37:00.166952   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.167481   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:00.167509   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.167805   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:37:00.167993   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:00.168148   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:00.168305   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:37:00.168814   75550 main.go:141] libmachine: Using SSH client type: native
	I0919 23:37:00.169114   75550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0919 23:37:00.169139   75550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 23:37:00.470740   75550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 23:37:00.470775   75550 main.go:141] libmachine: Checking connection to Docker...
	I0919 23:37:00.470785   75550 main.go:141] libmachine: (bridge-024908) Calling .GetURL
	I0919 23:37:00.473613   75550 main.go:141] libmachine: (bridge-024908) DBG | using libvirt version 8000000
	I0919 23:37:00.477468   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.477953   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:00.477982   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.478195   75550 main.go:141] libmachine: Docker is up and running!
	I0919 23:37:00.478215   75550 main.go:141] libmachine: Reticulating splines...
	I0919 23:37:00.478223   75550 client.go:171] duration metric: took 21.827694024s to LocalClient.Create
	I0919 23:37:00.478248   75550 start.go:167] duration metric: took 21.827760989s to libmachine.API.Create "bridge-024908"
	I0919 23:37:00.478261   75550 start.go:293] postStartSetup for "bridge-024908" (driver="kvm2")
	I0919 23:37:00.478273   75550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:37:00.478295   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:37:00.478535   75550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:37:00.478571   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:37:00.481680   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.482153   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:00.482184   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.482442   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:37:00.482645   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:00.482831   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:37:00.483038   75550 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa Username:docker}
	I0919 23:37:00.576109   75550 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:37:00.582200   75550 info.go:137] Remote host: Buildroot 2025.02
	I0919 23:37:00.582243   75550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14764/.minikube/addons for local assets ...
	I0919 23:37:00.582311   75550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14764/.minikube/files for local assets ...
	I0919 23:37:00.582384   75550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/ssl/certs/186712.pem -> 186712.pem in /etc/ssl/certs
	I0919 23:37:00.582478   75550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:37:00.597543   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/ssl/certs/186712.pem --> /etc/ssl/certs/186712.pem (1708 bytes)
	I0919 23:37:00.633349   75550 start.go:296] duration metric: took 155.074901ms for postStartSetup
	I0919 23:37:00.633395   75550 main.go:141] libmachine: (bridge-024908) Calling .GetConfigRaw
	I0919 23:37:00.634038   75550 main.go:141] libmachine: (bridge-024908) Calling .GetIP
	I0919 23:37:00.637262   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.637698   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:00.637735   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.638136   75550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/config.json ...
	I0919 23:37:00.638425   75550 start.go:128] duration metric: took 22.007596337s to createHost
	I0919 23:37:00.638457   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:37:00.641395   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.641854   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:00.641897   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.642095   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:37:00.642284   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:00.642435   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:00.642641   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:37:00.642868   75550 main.go:141] libmachine: Using SSH client type: native
	I0919 23:37:00.643171   75550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0919 23:37:00.643190   75550 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 23:37:00.768225   75550 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758325020.739336391
	
	I0919 23:37:00.768252   75550 fix.go:216] guest clock: 1758325020.739336391
	I0919 23:37:00.768263   75550 fix.go:229] Guest: 2025-09-19 23:37:00.739336391 +0000 UTC Remote: 2025-09-19 23:37:00.638441688 +0000 UTC m=+22.170446021 (delta=100.894703ms)
	I0919 23:37:00.768291   75550 fix.go:200] guest clock delta is within tolerance: 100.894703ms
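
The `fix.go` lines above compare a guest `date +%s.%N` reading against host time and accept the machine when the delta is small. A sketch of that computation, using the sample values from the log; the 2s tolerance is an assumed figure, not necessarily minikube's constant:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses a guest `date +%s.%N` reading and returns guest - host.
func clockDelta(guest string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Host timestamp and guest reading copied from the log above.
	host := time.Date(2025, 9, 19, 23, 37, 0, 638441688, time.UTC)
	d, err := clockDelta("1758325020.739336391", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	fmt.Printf("delta=%s within tolerance: %v\n", d, d.Abs() < tolerance)
}
```
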
	I0919 23:37:00.768299   75550 start.go:83] releasing machines lock for "bridge-024908", held for 22.137606708s
	I0919 23:37:00.768331   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:37:00.768592   75550 main.go:141] libmachine: (bridge-024908) Calling .GetIP
	I0919 23:37:00.773166   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.773711   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:00.773765   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.774130   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:37:00.774996   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:37:00.775212   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:37:00.775317   75550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:37:00.775371   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:37:00.775710   75550 ssh_runner.go:195] Run: cat /version.json
	I0919 23:37:00.775759   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:37:00.780875   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.782097   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:00.782130   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.782403   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.782790   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:37:00.783061   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:00.783226   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:37:00.783351   75550 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa Username:docker}
	I0919 23:37:00.785472   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:00.785770   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.785870   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:37:00.786115   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:00.786286   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:37:00.786490   75550 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa Username:docker}
	I0919 23:37:00.904049   75550 ssh_runner.go:195] Run: systemctl --version
	I0919 23:37:00.913551   75550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 23:37:01.100048   75550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 23:37:01.110465   75550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 23:37:01.110531   75550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:37:01.136099   75550 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 23:37:01.136136   75550 start.go:495] detecting cgroup driver to use...
	I0919 23:37:01.136222   75550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 23:37:01.170382   75550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:37:01.196560   75550 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:37:01.196632   75550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:37:01.218930   75550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:37:01.237586   75550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:37:01.410326   75550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:37:01.652921   75550 docker.go:234] disabling docker service ...
	I0919 23:37:01.652995   75550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:37:01.672795   75550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:37:01.691736   75550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:37:01.900102   75550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:37:02.086155   75550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:37:02.106794   75550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:37:02.134056   75550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 23:37:02.134154   75550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:37:02.147538   75550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 23:37:02.147639   75550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:37:02.161604   75550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:37:02.176520   75550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:37:02.192461   75550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:37:02.208868   75550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:37:02.224883   75550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:37:02.252496   75550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:37:02.267672   75550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:37:02.283159   75550 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 23:37:02.283231   75550 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 23:37:02.308362   75550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
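
The modprobe fallback above is the usual check-then-load pattern: when the bridge netfilter sysctl file is missing, load `br_netfilter`, then enable IPv4 forwarding. A sketch of the same steps run locally (the real code runs them over SSH inside the guest, and both steps need root):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Verify the bridge netfilter sysctl exists; if the stat fails, load the
// br_netfilter module, then enable IP forwarding. Sketch only.
func main() {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge netfilter not present, loading br_netfilter:", err)
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v: %s\n", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` (needs root).
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}
```
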
	I0919 23:37:02.322738   75550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:37:02.512331   75550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 23:37:02.637192   75550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 23:37:02.637262   75550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 23:37:02.643864   75550 start.go:563] Will wait 60s for crictl version
	I0919 23:37:02.643944   75550 ssh_runner.go:195] Run: which crictl
	I0919 23:37:02.648498   75550 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:37:02.700160   75550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 23:37:02.700264   75550 ssh_runner.go:195] Run: crio --version
	I0919 23:37:02.734700   75550 ssh_runner.go:195] Run: crio --version
	I0919 23:37:02.773151   75550 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0919 23:37:02.774609   75550 main.go:141] libmachine: (bridge-024908) Calling .GetIP
	I0919 23:37:02.778039   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:02.778469   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:02.778496   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:02.778770   75550 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0919 23:37:02.784360   75550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:37:02.805839   75550 kubeadm.go:875] updating cluster {Name:bridge-024908 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-024908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:37:02.805991   75550 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 23:37:02.806075   75550 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:37:02.850423   75550 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0919 23:37:02.850484   75550 ssh_runner.go:195] Run: which lz4
	I0919 23:37:02.856435   75550 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 23:37:02.863463   75550 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 23:37:02.863498   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0919 23:37:04.727721   75550 crio.go:462] duration metric: took 1.871326523s to copy over tarball
	I0919 23:37:04.727817   75550 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 23:37:06.744137   75550 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.016297676s)
	I0919 23:37:06.744166   75550 crio.go:469] duration metric: took 2.016394617s to extract the tarball
	I0919 23:37:06.744173   75550 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0919 23:37:06.792552   75550 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:37:06.851423   75550 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 23:37:06.851455   75550 cache_images.go:85] Images are preloaded, skipping loading
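
Whether the images are "preloaded" is decided above by listing images through crictl and looking for the expected kube-apiserver tag. A sketch of that check; the JSON shape matches `crictl images --output json` output, with fields other than repoTags omitted for brevity:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the relevant part of `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	// Tag copied from the log's preload decision.
	want := "registry.k8s.io/kube-apiserver:v1.34.0"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("all images are preloaded")
				return
			}
		}
	}
	fmt.Println("assuming images are not preloaded")
}
```
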
	I0919 23:37:06.851466   75550 kubeadm.go:926] updating node { 192.168.61.181 8443 v1.34.0 crio true true} ...
	I0919 23:37:06.851590   75550 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-024908 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:bridge-024908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0919 23:37:06.851675   75550 ssh_runner.go:195] Run: crio config
	I0919 23:37:06.913893   75550 cni.go:84] Creating CNI manager for "bridge"
	I0919 23:37:06.913930   75550 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:37:06.913969   75550 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.181 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-024908 NodeName:bridge-024908 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:37:06.914162   75550 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-024908"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.181"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.181"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:37:06.914242   75550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:37:06.929506   75550 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:37:06.929625   75550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:37:06.944823   75550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 23:37:06.971622   75550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:37:06.996579   75550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I0919 23:37:07.021773   75550 ssh_runner.go:195] Run: grep 192.168.61.181	control-plane.minikube.internal$ /etc/hosts
	I0919 23:37:07.026560   75550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
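
The /etc/hosts rewrite above drops any existing control-plane.minikube.internal entry and appends a fresh one. An equivalent idempotent rewrite in Go, sketch only (writing /etc/hosts needs root; the real code runs the bash pipeline over SSH):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost removes any line ending in "\t<name>" and appends "ip\tname",
// matching the effect of the log's grep -v / echo / cp pipeline.
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing entry for the name, matching the grep -v step.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(pinHost("/etc/hosts", "192.168.61.181", "control-plane.minikube.internal"))
}
```
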
	I0919 23:37:07.043397   75550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:37:07.209534   75550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:37:07.232118   75550 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908 for IP: 192.168.61.181
	I0919 23:37:07.232143   75550 certs.go:194] generating shared ca certs ...
	I0919 23:37:07.232158   75550 certs.go:226] acquiring lock for ca certs: {Name:mk1fe71ea89348ba0bd576e99c774a344fba186e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:07.232332   75550 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14764/.minikube/ca.key
	I0919 23:37:07.232379   75550 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.key
	I0919 23:37:07.232393   75550 certs.go:256] generating profile certs ...
	I0919 23:37:07.232459   75550 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.key
	I0919 23:37:07.232478   75550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt with IP's: []
	I0919 23:37:07.278857   75550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt ...
	I0919 23:37:07.278885   75550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: {Name:mk79ccabf3400edf55765f4a8824d93428f42fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:07.279120   75550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.key ...
	I0919 23:37:07.279138   75550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.key: {Name:mk662a6d2ffc59de416776a7a86f38bc8d65b0b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:07.279246   75550 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.key.e0b1cc24
	I0919 23:37:07.279267   75550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.crt.e0b1cc24 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.181]
	I0919 23:37:07.581312   75550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.crt.e0b1cc24 ...
	I0919 23:37:07.581341   75550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.crt.e0b1cc24: {Name:mkdfa65ad5651321aa3f30249330f65622547baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:07.581533   75550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.key.e0b1cc24 ...
	I0919 23:37:07.581548   75550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.key.e0b1cc24: {Name:mkdabb782e88d32fb84eaf1ac02abafa0c83f4b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:07.581655   75550 certs.go:381] copying /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.crt.e0b1cc24 -> /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.crt
	I0919 23:37:07.581821   75550 certs.go:385] copying /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.key.e0b1cc24 -> /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.key
	I0919 23:37:07.581920   75550 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/proxy-client.key
	I0919 23:37:07.581944   75550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/proxy-client.crt with IP's: []
	I0919 23:37:08.000959   75550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/proxy-client.crt ...
	I0919 23:37:08.000986   75550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/proxy-client.crt: {Name:mkf85afd35073317efa4a6b19e23641c7a331aff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:08.001170   75550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/proxy-client.key ...
	I0919 23:37:08.001184   75550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/proxy-client.key: {Name:mk3f2c425a07ff4d1f574e71c87ee48f134d63bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
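All profile certificates now exist: a client cert for minikube-user, the apiserver serving cert, and the aggregator proxy-client cert, each signed by the shared CAs found above. The apiserver cert's SANs are exactly the IPs in the log line: the first ServiceCIDR address 10.96.0.1, 127.0.0.1, 10.0.0.1, and the node IP 192.168.61.181. A rough self-signed stand-in with the same SAN set (illustration only; minikube actually signs against its own minikubeCA, and -addext needs OpenSSL 1.1.1+):

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout apiserver.key -out apiserver.crt -subj '/CN=minikube' \
        -addext 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.61.181'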
	I0919 23:37:08.001382   75550 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/18671.pem (1338 bytes)
	W0919 23:37:08.001415   75550 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14764/.minikube/certs/18671_empty.pem, impossibly tiny 0 bytes
	I0919 23:37:08.001425   75550 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 23:37:08.001445   75550 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:37:08.001471   75550 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:37:08.001492   75550 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/key.pem (1679 bytes)
	I0919 23:37:08.001529   75550 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/ssl/certs/186712.pem (1708 bytes)
	I0919 23:37:08.002199   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:37:08.059820   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 23:37:08.113479   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:37:08.151056   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:37:08.188598   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 23:37:08.222703   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:37:08.260566   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:37:08.296326   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 23:37:08.340647   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/ssl/certs/186712.pem --> /usr/share/ca-certificates/186712.pem (1708 bytes)
	I0919 23:37:08.378528   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:37:08.423651   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/certs/18671.pem --> /usr/share/ca-certificates/18671.pem (1338 bytes)
	I0919 23:37:08.467492   75550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:37:08.495077   75550 ssh_runner.go:195] Run: openssl version
	I0919 23:37:08.502873   75550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:37:08.519331   75550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:37:08.527531   75550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:37:08.527600   75550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:37:08.538171   75550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:37:08.554573   75550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18671.pem && ln -fs /usr/share/ca-certificates/18671.pem /etc/ssl/certs/18671.pem"
	I0919 23:37:08.571856   75550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18671.pem
	I0919 23:37:08.578662   75550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:22 /usr/share/ca-certificates/18671.pem
	I0919 23:37:08.578735   75550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18671.pem
	I0919 23:37:08.587689   75550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18671.pem /etc/ssl/certs/51391683.0"
	I0919 23:37:08.603323   75550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/186712.pem && ln -fs /usr/share/ca-certificates/186712.pem /etc/ssl/certs/186712.pem"
	I0919 23:37:08.619563   75550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/186712.pem
	I0919 23:37:08.625749   75550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:22 /usr/share/ca-certificates/186712.pem
	I0919 23:37:08.625815   75550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/186712.pem
	I0919 23:37:08.634002   75550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/186712.pem /etc/ssl/certs/3ec20f2e.0"
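The test -L / ln -fs pairs above implement OpenSSL's trust-directory convention: certificates in /etc/ssl/certs are looked up by subject-hash filenames of the form <hash>.0, so each installed PEM needs a companion symlink named after its openssl x509 -hash value. Reproducing the first link by hand (the log computed b5213941 for minikubeCA.pem):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"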
	I0919 23:37:08.650009   75550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:37:08.655522   75550 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:37:08.655576   75550 kubeadm.go:392] StartCluster: {Name:bridge-024908 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-024908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:37:08.655638   75550 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 23:37:08.655689   75550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:37:08.698969   75550 cri.go:89] found id: ""
	I0919 23:37:08.699041   75550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:37:08.716698   75550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:37:08.731322   75550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:37:08.751983   75550 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:37:08.752002   75550 kubeadm.go:157] found existing configuration files:
	
	I0919 23:37:08.752058   75550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:37:08.767032   75550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:37:08.767090   75550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:37:08.783316   75550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:37:08.795760   75550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:37:08.795814   75550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:37:08.809265   75550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:37:08.822265   75550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:37:08.822326   75550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:37:08.836477   75550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:37:08.848426   75550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:37:08.848515   75550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:37:08.862265   75550 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 23:37:08.922610   75550 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:37:08.922696   75550 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:37:09.034699   75550 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:37:09.034883   75550 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:37:09.035086   75550 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:37:09.047666   75550 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:37:09.321823   75550 out.go:252]   - Generating certificates and keys ...
	I0919 23:37:09.321968   75550 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:37:09.322073   75550 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:37:09.322184   75550 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:37:09.347963   75550 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:37:09.482372   75550 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:37:09.853248   75550 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:37:10.322469   75550 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:37:10.322660   75550 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-024908 localhost] and IPs [192.168.61.181 127.0.0.1 ::1]
	I0919 23:37:10.527309   75550 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:37:10.527480   75550 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-024908 localhost] and IPs [192.168.61.181 127.0.0.1 ::1]
	I0919 23:37:10.753826   75550 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:37:11.368432   75550 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:37:11.755436   75550 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:37:11.755668   75550 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:37:12.305982   75550 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:37:12.696979   75550 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:37:12.850075   75550 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:37:13.115083   75550 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:37:13.231334   75550 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:37:13.232026   75550 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:37:13.234652   75550 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:37:13.303712   75550 out.go:252]   - Booting up control plane ...
	I0919 23:37:13.303896   75550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:37:13.304014   75550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:37:13.304160   75550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:37:13.304311   75550 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:37:13.304450   75550 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:37:13.304603   75550 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:37:13.304756   75550 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:37:13.304824   75550 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:37:13.487400   75550 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:37:13.487533   75550 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:37:14.491714   75550 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003425478s
	I0919 23:37:14.495747   75550 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:37:14.495873   75550 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.61.181:8443/livez
	I0919 23:37:14.496903   75550 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:37:14.497032   75550 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:37:18.996516   75550 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 4.501987206s
	I0919 23:37:19.111283   75550 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.617352883s
	I0919 23:37:21.494958   75550 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 7.001624831s
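These are the same component health endpoints kubeadm polls in the lines above, and they can be probed by hand from inside the VM when a boot stalls; controller-manager and scheduler serve self-signed TLS, hence -k:

    curl -s  http://127.0.0.1:10248/healthz      # kubelet
    curl -sk https://127.0.0.1:10257/healthz     # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez       # kube-scheduler
    curl -sk https://192.168.61.181:8443/healthz # kube-apiserver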
	I0919 23:37:21.517641   75550 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:37:21.532079   75550 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:37:21.551635   75550 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:37:21.551915   75550 kubeadm.go:310] [mark-control-plane] Marking the node bridge-024908 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:37:21.573788   75550 kubeadm.go:310] [bootstrap-token] Using token: oan5k4.y35iylaimwyxz31p
	I0919 23:37:21.575410   75550 out.go:252]   - Configuring RBAC rules ...
	I0919 23:37:21.575576   75550 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:37:21.586494   75550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:37:21.598851   75550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:37:21.603078   75550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:37:21.608646   75550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:37:21.613569   75550 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:37:21.904188   75550 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:37:22.392583   75550 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:37:22.905202   75550 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:37:22.907292   75550 kubeadm.go:310] 
	I0919 23:37:22.907390   75550 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:37:22.907401   75550 kubeadm.go:310] 
	I0919 23:37:22.907498   75550 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:37:22.907507   75550 kubeadm.go:310] 
	I0919 23:37:22.907580   75550 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:37:22.907702   75550 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:37:22.907802   75550 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:37:22.907816   75550 kubeadm.go:310] 
	I0919 23:37:22.907916   75550 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:37:22.907930   75550 kubeadm.go:310] 
	I0919 23:37:22.907997   75550 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:37:22.908008   75550 kubeadm.go:310] 
	I0919 23:37:22.908080   75550 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:37:22.908200   75550 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:37:22.908319   75550 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:37:22.908332   75550 kubeadm.go:310] 
	I0919 23:37:22.908458   75550 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:37:22.908577   75550 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:37:22.908594   75550 kubeadm.go:310] 
	I0919 23:37:22.908705   75550 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token oan5k4.y35iylaimwyxz31p \
	I0919 23:37:22.908853   75550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:764767ee84c2df0ad4ae14ef93303d4368042da5603c686ffbd3dbfd5d1666a5 \
	I0919 23:37:22.908884   75550 kubeadm.go:310] 	--control-plane 
	I0919 23:37:22.908891   75550 kubeadm.go:310] 
	I0919 23:37:22.909004   75550 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:37:22.909023   75550 kubeadm.go:310] 
	I0919 23:37:22.909139   75550 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token oan5k4.y35iylaimwyxz31p \
	I0919 23:37:22.909286   75550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:764767ee84c2df0ad4ae14ef93303d4368042da5603c686ffbd3dbfd5d1666a5 
	I0919 23:37:22.914281   75550 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
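The sole preflight warning above is cosmetic here: minikube starts the kubelet itself (the systemctl start kubelet runs earlier in this log) rather than relying on the unit being enabled at boot. On a hand-managed node the fix is the one the warning names:

    sudo systemctl enable kubelet.service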
	I0919 23:37:22.914320   75550 cni.go:84] Creating CNI manager for "bridge"
	I0919 23:37:22.917094   75550 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 23:37:22.918525   75550 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 23:37:22.935377   75550 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
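The 496 bytes written to /etc/cni/net.d/1-k8s.conflist are minikube's bridge CNI chain; the file's contents are not echoed in the log. For illustration, a generic bridge conflist of the same shape (not necessarily minikube's exact file; the subnet matches the clusterCIDR configured above):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF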
	I0919 23:37:22.966357   75550 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:37:22.966416   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:22.966464   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-024908 minikube.k8s.io/updated_at=2025_09_19T23_37_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=bridge-024908 minikube.k8s.io/primary=true
	I0919 23:37:23.203789   75550 ops.go:34] apiserver oom_adj: -16
	I0919 23:37:23.203866   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:23.704828   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:24.204397   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:24.704905   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:25.204337   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:25.704067   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:26.204817   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:26.703983   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:27.204824   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:27.704200   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:27.797627   75550 kubeadm.go:1105] duration metric: took 4.831273881s to wait for elevateKubeSystemPrivileges
	I0919 23:37:27.797666   75550 kubeadm.go:394] duration metric: took 19.142094171s to StartCluster
	I0919 23:37:27.797683   75550 settings.go:142] acquiring lock: {Name:mk9e6bfe60e4d22990b0b362d40b65315947b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:27.797765   75550 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 23:37:27.798983   75550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/kubeconfig: {Name:mk29db95201211dec339ee278b6433541126d194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:27.799265   75550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:37:27.799323   75550 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 23:37:27.799517   75550 config.go:182] Loaded profile config "bridge-024908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:37:27.799481   75550 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:37:27.799571   75550 addons.go:69] Setting storage-provisioner=true in profile "bridge-024908"
	I0919 23:37:27.799605   75550 addons.go:69] Setting default-storageclass=true in profile "bridge-024908"
	I0919 23:37:27.799629   75550 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-024908"
	I0919 23:37:27.799659   75550 addons.go:238] Setting addon storage-provisioner=true in "bridge-024908"
	I0919 23:37:27.799698   75550 host.go:66] Checking if "bridge-024908" exists ...
	I0919 23:37:27.800067   75550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:37:27.800089   75550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:37:27.800103   75550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:37:27.800128   75550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:37:27.804391   75550 out.go:179] * Verifying Kubernetes components...
	I0919 23:37:27.806295   75550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:37:27.815312   75550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0919 23:37:27.815831   75550 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:37:27.816354   75550 main.go:141] libmachine: Using API Version  1
	I0919 23:37:27.816382   75550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:37:27.816817   75550 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:37:27.817042   75550 main.go:141] libmachine: (bridge-024908) Calling .GetState
	I0919 23:37:27.817944   75550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46285
	I0919 23:37:27.818342   75550 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:37:27.818860   75550 main.go:141] libmachine: Using API Version  1
	I0919 23:37:27.818885   75550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:37:27.819383   75550 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:37:27.820077   75550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:37:27.820125   75550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:37:27.820715   75550 addons.go:238] Setting addon default-storageclass=true in "bridge-024908"
	I0919 23:37:27.820768   75550 host.go:66] Checking if "bridge-024908" exists ...
	I0919 23:37:27.821068   75550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:37:27.821113   75550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:37:27.834971   75550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45933
	I0919 23:37:27.835369   75550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43609
	I0919 23:37:27.835635   75550 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:37:27.835973   75550 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:37:27.836163   75550 main.go:141] libmachine: Using API Version  1
	I0919 23:37:27.836184   75550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:37:27.836504   75550 main.go:141] libmachine: Using API Version  1
	I0919 23:37:27.836528   75550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:37:27.836599   75550 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:37:27.836825   75550 main.go:141] libmachine: (bridge-024908) Calling .GetState
	I0919 23:37:27.837003   75550 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:37:27.837582   75550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:37:27.837629   75550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:37:27.839286   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:37:27.844832   75550 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:37:27.846566   75550 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:37:27.846593   75550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:37:27.846624   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:37:27.851603   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:27.852222   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:27.852251   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:27.852551   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:37:27.852885   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:27.853113   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:37:27.853322   75550 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa Username:docker}
	I0919 23:37:27.855262   75550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36445
	I0919 23:37:27.855675   75550 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:37:27.856191   75550 main.go:141] libmachine: Using API Version  1
	I0919 23:37:27.856215   75550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:37:27.856635   75550 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:37:27.856897   75550 main.go:141] libmachine: (bridge-024908) Calling .GetState
	I0919 23:37:27.859183   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:37:27.859455   75550 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:37:27.859488   75550 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:37:27.859511   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:37:27.863208   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:27.863776   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:27.863807   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:27.864034   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:37:27.864279   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:27.864464   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:37:27.864643   75550 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa Username:docker}
	I0919 23:37:28.087700   75550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:37:28.113531   75550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:37:28.426265   75550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:37:28.431992   75550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:37:28.884920   75550 start.go:976] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
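The host record lands in CoreDNS via the sed pipeline at 23:37:28.087 above, which splices a hosts block into the Corefile ahead of the forward directive. Reconstructed from that sed expression, the injected stanza is:

        hosts {
           192.168.61.1 host.minikube.internal
           fallthrough
        }

which is what makes host.minikube.internal resolvable from inside the cluster.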
	I0919 23:37:28.886130   75550 node_ready.go:35] waiting up to 15m0s for node "bridge-024908" to be "Ready" ...
	I0919 23:37:28.906015   75550 node_ready.go:49] node "bridge-024908" is "Ready"
	I0919 23:37:28.906049   75550 node_ready.go:38] duration metric: took 19.859764ms for node "bridge-024908" to be "Ready" ...
	I0919 23:37:28.906066   75550 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:37:28.906123   75550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:37:29.391916   75550 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-024908" context rescaled to 1 replicas
	I0919 23:37:29.498850   75550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.072552362s)
	I0919 23:37:29.498904   75550 main.go:141] libmachine: Making call to close driver server
	I0919 23:37:29.498915   75550 main.go:141] libmachine: (bridge-024908) Calling .Close
	I0919 23:37:29.498918   75550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.066894416s)
	I0919 23:37:29.498957   75550 api_server.go:72] duration metric: took 1.699601344s to wait for apiserver process to appear ...
	I0919 23:37:29.498993   75550 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:37:29.499014   75550 api_server.go:253] Checking apiserver healthz at https://192.168.61.181:8443/healthz ...
	I0919 23:37:29.498961   75550 main.go:141] libmachine: Making call to close driver server
	I0919 23:37:29.499172   75550 main.go:141] libmachine: (bridge-024908) Calling .Close
	I0919 23:37:29.499245   75550 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:37:29.499260   75550 main.go:141] libmachine: (bridge-024908) DBG | Closing plugin on server side
	I0919 23:37:29.499268   75550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:37:29.499277   75550 main.go:141] libmachine: Making call to close driver server
	I0919 23:37:29.499284   75550 main.go:141] libmachine: (bridge-024908) Calling .Close
	I0919 23:37:29.499484   75550 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:37:29.499489   75550 main.go:141] libmachine: (bridge-024908) DBG | Closing plugin on server side
	I0919 23:37:29.499500   75550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:37:29.499508   75550 main.go:141] libmachine: Making call to close driver server
	I0919 23:37:29.499516   75550 main.go:141] libmachine: (bridge-024908) Calling .Close
	I0919 23:37:29.499517   75550 main.go:141] libmachine: (bridge-024908) DBG | Closing plugin on server side
	I0919 23:37:29.499541   75550 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:37:29.499552   75550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:37:29.499784   75550 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:37:29.499798   75550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:37:29.548296   75550 api_server.go:279] https://192.168.61.181:8443/healthz returned 200:
	ok
	I0919 23:37:29.549453   75550 main.go:141] libmachine: Making call to close driver server
	I0919 23:37:29.549472   75550 main.go:141] libmachine: (bridge-024908) Calling .Close
	I0919 23:37:29.549843   75550 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:37:29.549888   75550 main.go:141] libmachine: (bridge-024908) DBG | Closing plugin on server side
	I0919 23:37:29.549904   75550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:37:29.551407   75550 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 23:37:29.552566   75550 addons.go:514] duration metric: took 1.753093123s for enable addons: enabled=[storage-provisioner default-storageclass]
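The same addon state is visible from the host with the minikube CLI (the same binary the rest of this report invokes):

    out/minikube-linux-amd64 -p bridge-024908 addons list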
	I0919 23:37:29.552812   75550 api_server.go:141] control plane version: v1.34.0
	I0919 23:37:29.552841   75550 api_server.go:131] duration metric: took 53.840085ms to wait for apiserver health ...
	I0919 23:37:29.552851   75550 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:37:29.574521   75550 system_pods.go:59] 8 kube-system pods found
	I0919 23:37:29.574560   75550 system_pods.go:61] "coredns-66bc5c9577-f6f8d" [d1e7873e-6875-4bbe-8193-28c6bf3b050e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:29.574569   75550 system_pods.go:61] "coredns-66bc5c9577-nnctc" [1e4e50ce-029c-40bd-a5fe-cf3811bd00db] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:29.574574   75550 system_pods.go:61] "etcd-bridge-024908" [5538ee2d-8602-442d-a181-ce78f7bd9108] Running
	I0919 23:37:29.574582   75550 system_pods.go:61] "kube-apiserver-bridge-024908" [05626e88-3d56-41a5-b965-3b2c86e50618] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:37:29.574589   75550 system_pods.go:61] "kube-controller-manager-bridge-024908" [ad57c2f7-909c-4182-98f8-c1bf467a572a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:37:29.574631   75550 system_pods.go:61] "kube-proxy-vswk4" [abc89b80-f24a-4b1d-9553-6c821e379d81] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:37:29.574637   75550 system_pods.go:61] "kube-scheduler-bridge-024908" [7d4c1538-dd14-4373-85b5-15dfcdaea017] Running
	I0919 23:37:29.574644   75550 system_pods.go:61] "storage-provisioner" [811ae028-d815-429e-9b89-15d35c74844d] Pending
	I0919 23:37:29.574650   75550 system_pods.go:74] duration metric: took 21.793524ms to wait for pod list to return data ...
	I0919 23:37:29.574664   75550 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:37:29.606010   75550 default_sa.go:45] found service account: "default"
	I0919 23:37:29.606041   75550 default_sa.go:55] duration metric: took 31.369318ms for default service account to be created ...
	I0919 23:37:29.606052   75550 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:37:29.633174   75550 system_pods.go:86] 8 kube-system pods found
	I0919 23:37:29.633211   75550 system_pods.go:89] "coredns-66bc5c9577-f6f8d" [d1e7873e-6875-4bbe-8193-28c6bf3b050e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:29.633222   75550 system_pods.go:89] "coredns-66bc5c9577-nnctc" [1e4e50ce-029c-40bd-a5fe-cf3811bd00db] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:29.633232   75550 system_pods.go:89] "etcd-bridge-024908" [5538ee2d-8602-442d-a181-ce78f7bd9108] Running
	I0919 23:37:29.633241   75550 system_pods.go:89] "kube-apiserver-bridge-024908" [05626e88-3d56-41a5-b965-3b2c86e50618] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:37:29.633250   75550 system_pods.go:89] "kube-controller-manager-bridge-024908" [ad57c2f7-909c-4182-98f8-c1bf467a572a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:37:29.633260   75550 system_pods.go:89] "kube-proxy-vswk4" [abc89b80-f24a-4b1d-9553-6c821e379d81] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:37:29.633269   75550 system_pods.go:89] "kube-scheduler-bridge-024908" [7d4c1538-dd14-4373-85b5-15dfcdaea017] Running
	I0919 23:37:29.633278   75550 system_pods.go:89] "storage-provisioner" [811ae028-d815-429e-9b89-15d35c74844d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:37:29.633306   75550 retry.go:31] will retry after 283.461542ms: missing components: kube-dns, kube-proxy
	I0919 23:37:29.924680   75550 system_pods.go:86] 8 kube-system pods found
	I0919 23:37:29.924717   75550 system_pods.go:89] "coredns-66bc5c9577-f6f8d" [d1e7873e-6875-4bbe-8193-28c6bf3b050e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:29.924740   75550 system_pods.go:89] "coredns-66bc5c9577-nnctc" [1e4e50ce-029c-40bd-a5fe-cf3811bd00db] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:29.924748   75550 system_pods.go:89] "etcd-bridge-024908" [5538ee2d-8602-442d-a181-ce78f7bd9108] Running
	I0919 23:37:29.924757   75550 system_pods.go:89] "kube-apiserver-bridge-024908" [05626e88-3d56-41a5-b965-3b2c86e50618] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:37:29.924766   75550 system_pods.go:89] "kube-controller-manager-bridge-024908" [ad57c2f7-909c-4182-98f8-c1bf467a572a] Running
	I0919 23:37:29.924775   75550 system_pods.go:89] "kube-proxy-vswk4" [abc89b80-f24a-4b1d-9553-6c821e379d81] Running
	I0919 23:37:29.924780   75550 system_pods.go:89] "kube-scheduler-bridge-024908" [7d4c1538-dd14-4373-85b5-15dfcdaea017] Running
	I0919 23:37:29.924788   75550 system_pods.go:89] "storage-provisioner" [811ae028-d815-429e-9b89-15d35c74844d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:37:29.924809   75550 retry.go:31] will retry after 331.047278ms: missing components: kube-dns
	I0919 23:37:30.260743   75550 system_pods.go:86] 8 kube-system pods found
	I0919 23:37:30.260779   75550 system_pods.go:89] "coredns-66bc5c9577-f6f8d" [d1e7873e-6875-4bbe-8193-28c6bf3b050e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:30.260786   75550 system_pods.go:89] "coredns-66bc5c9577-nnctc" [1e4e50ce-029c-40bd-a5fe-cf3811bd00db] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:30.260793   75550 system_pods.go:89] "etcd-bridge-024908" [5538ee2d-8602-442d-a181-ce78f7bd9108] Running
	I0919 23:37:30.260800   75550 system_pods.go:89] "kube-apiserver-bridge-024908" [05626e88-3d56-41a5-b965-3b2c86e50618] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:37:30.260804   75550 system_pods.go:89] "kube-controller-manager-bridge-024908" [ad57c2f7-909c-4182-98f8-c1bf467a572a] Running
	I0919 23:37:30.260807   75550 system_pods.go:89] "kube-proxy-vswk4" [abc89b80-f24a-4b1d-9553-6c821e379d81] Running
	I0919 23:37:30.260811   75550 system_pods.go:89] "kube-scheduler-bridge-024908" [7d4c1538-dd14-4373-85b5-15dfcdaea017] Running
	I0919 23:37:30.260815   75550 system_pods.go:89] "storage-provisioner" [811ae028-d815-429e-9b89-15d35c74844d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:37:30.260828   75550 retry.go:31] will retry after 445.277292ms: missing components: kube-dns
	I0919 23:37:30.712500   75550 system_pods.go:86] 8 kube-system pods found
	I0919 23:37:30.712531   75550 system_pods.go:89] "coredns-66bc5c9577-f6f8d" [d1e7873e-6875-4bbe-8193-28c6bf3b050e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:30.712542   75550 system_pods.go:89] "coredns-66bc5c9577-nnctc" [1e4e50ce-029c-40bd-a5fe-cf3811bd00db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:30.712548   75550 system_pods.go:89] "etcd-bridge-024908" [5538ee2d-8602-442d-a181-ce78f7bd9108] Running
	I0919 23:37:30.712558   75550 system_pods.go:89] "kube-apiserver-bridge-024908" [05626e88-3d56-41a5-b965-3b2c86e50618] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:37:30.712562   75550 system_pods.go:89] "kube-controller-manager-bridge-024908" [ad57c2f7-909c-4182-98f8-c1bf467a572a] Running
	I0919 23:37:30.712566   75550 system_pods.go:89] "kube-proxy-vswk4" [abc89b80-f24a-4b1d-9553-6c821e379d81] Running
	I0919 23:37:30.712569   75550 system_pods.go:89] "kube-scheduler-bridge-024908" [7d4c1538-dd14-4373-85b5-15dfcdaea017] Running
	I0919 23:37:30.712572   75550 system_pods.go:89] "storage-provisioner" [811ae028-d815-429e-9b89-15d35c74844d] Running
	I0919 23:37:30.712579   75550 system_pods.go:126] duration metric: took 1.106521028s to wait for k8s-apps to be running ...
	I0919 23:37:30.712585   75550 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:37:30.712643   75550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:37:30.732487   75550 system_svc.go:56] duration metric: took 19.893849ms WaitForService to wait for kubelet
	I0919 23:37:30.732518   75550 kubeadm.go:578] duration metric: took 2.933164933s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:37:30.732534   75550 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:37:30.736021   75550 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 23:37:30.736057   75550 node_conditions.go:123] node cpu capacity is 2
	I0919 23:37:30.736074   75550 node_conditions.go:105] duration metric: took 3.536029ms to run NodePressure ...
	I0919 23:37:30.736084   75550 start.go:241] waiting for startup goroutines ...
	I0919 23:37:30.736093   75550 start.go:246] waiting for cluster config update ...
	I0919 23:37:30.736106   75550 start.go:255] writing updated cluster config ...
	I0919 23:37:30.736425   75550 ssh_runner.go:195] Run: rm -f paused
	I0919 23:37:30.742435   75550 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:37:30.747825   75550 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f6f8d" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:37:32.755467   75550 pod_ready.go:104] pod "coredns-66bc5c9577-f6f8d" is not "Ready", error: <nil>
	W0919 23:37:35.254202   75550 pod_ready.go:104] pod "coredns-66bc5c9577-f6f8d" is not "Ready", error: <nil>
	W0919 23:37:37.255433   75550 pod_ready.go:104] pod "coredns-66bc5c9577-f6f8d" is not "Ready", error: <nil>
	W0919 23:37:39.256877   75550 pod_ready.go:104] pod "coredns-66bc5c9577-f6f8d" is not "Ready", error: <nil>
	I0919 23:37:40.751573   75550 pod_ready.go:99] pod "coredns-66bc5c9577-f6f8d" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-f6f8d" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-f6f8d" not found
	I0919 23:37:40.751605   75550 pod_ready.go:86] duration metric: took 10.00375082s for pod "coredns-66bc5c9577-f6f8d" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:37:40.751644   75550 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nnctc" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:37:42.757893   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:37:45.257639   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:37:47.258820   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:37:49.260985   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:37:51.758752   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:37:53.758976   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:37:56.258048   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:37:58.258337   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:38:00.764113   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:38:03.259818   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	I0919 23:38:04.757900   75550 pod_ready.go:94] pod "coredns-66bc5c9577-nnctc" is "Ready"
	I0919 23:38:04.757930   75550 pod_ready.go:86] duration metric: took 24.006278671s for pod "coredns-66bc5c9577-nnctc" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:04.761252   75550 pod_ready.go:83] waiting for pod "etcd-bridge-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:04.766557   75550 pod_ready.go:94] pod "etcd-bridge-024908" is "Ready"
	I0919 23:38:04.766585   75550 pod_ready.go:86] duration metric: took 5.299603ms for pod "etcd-bridge-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:04.769274   75550 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:04.775608   75550 pod_ready.go:94] pod "kube-apiserver-bridge-024908" is "Ready"
	I0919 23:38:04.775639   75550 pod_ready.go:86] duration metric: took 6.336927ms for pod "kube-apiserver-bridge-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:04.778115   75550 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:04.956080   75550 pod_ready.go:94] pod "kube-controller-manager-bridge-024908" is "Ready"
	I0919 23:38:04.956114   75550 pod_ready.go:86] duration metric: took 177.969622ms for pod "kube-controller-manager-bridge-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:05.156159   75550 pod_ready.go:83] waiting for pod "kube-proxy-vswk4" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:05.556114   75550 pod_ready.go:94] pod "kube-proxy-vswk4" is "Ready"
	I0919 23:38:05.556141   75550 pod_ready.go:86] duration metric: took 399.958599ms for pod "kube-proxy-vswk4" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:05.756089   75550 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:06.155987   75550 pod_ready.go:94] pod "kube-scheduler-bridge-024908" is "Ready"
	I0919 23:38:06.156017   75550 pod_ready.go:86] duration metric: took 399.900092ms for pod "kube-scheduler-bridge-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:06.156030   75550 pod_ready.go:40] duration metric: took 35.413562399s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:38:06.202255   75550 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:38:06.204902   75550 out.go:179] * Done! kubectl is now configured to use "bridge-024908" cluster and "default" namespace by default
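A note on the node_conditions checks above: after the apps are running, minikube reads the node's reported capacity (here cpu=2 and ephemeral-storage=17734596Ki) and verifies that no pressure condition is set. A minimal client-go sketch of the same reads (a hypothetical standalone program, not minikube's node_conditions.go; assumes a kubeconfig at the default path):

// Sketch only: read node capacity and pressure conditions with client-go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu capacity %s, ephemeral capacity %s\n",
			n.Name, cpu.String(), eph.String())
		// Flag the pressure conditions a NodePressure-style check cares about.
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
				c.Status == corev1.ConditionTrue {
				fmt.Printf("  %s is True\n", c.Type)
			}
		}
	}
}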
	
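The pod_ready.go loop above polls each kube-system control-plane pod on a roughly 2s interval until its Ready condition is True, and treats a pod that disappears (like the first coredns replica, replaced during the rollout) as success. A rough client-go equivalent of that "Ready or be gone" wait, offered as a sketch under those assumptions rather than minikube's actual implementation:

// Sketch: wait for a pod to become Ready, or accept its deletion.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	ticker := time.NewTicker(2 * time.Second) // the log above shows ~2s retries
	defer ticker.Stop()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // pod is gone, e.g. replaced by a newer ReplicaSet pod
		}
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute) // matches the 4m0s budget above
	defer cancel()
	if err := waitPodReadyOrGone(ctx, cs, "kube-system", "coredns-66bc5c9577-nnctc"); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready (or gone)")
}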
	
	==> CRI-O <==
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.816936990Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05115af8-6e9f-4939-9ddc-4b183ce56423 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.817157524Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d535d833bf717efb956cf2333c1b066d736288d6430c819636c8898614705cba,PodSandboxId:7dc46f2e1c62053024540103a673c22fa46949c4bd447d6861b210bb30b60eff,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1758325353635515732,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-mwz54,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 911a6b3c-3441-40a6-ac54-04cf424c179b,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec2522d96a5ed9e24e9ecf32e0d0c252795dcd9c98c44a5b6a16b9ea4a1a9e2,PodSandboxId:206391a3986c3bb63b39cf935fc260351d4023074f7b7aba696e1594757a1cef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758325038465690059,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 874b675f-ecbe-4052-a6fb-bc7a6028db03,},Annotations
:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:375a7d2fd8f93e60f9c3abc0d93eabc7ac7f390406b748cd91b0f70fa45e969d,PodSandboxId:1b559b4a4e82ced80c6bfe6ad4227f9e84668422a12a82317372e8b4f4c11dab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758325025436851219,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321717a-b901-415e-b199-977471c0ff1f,},Annotations:map[string]
string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34682f7775d4f0930f8412d24f895144c1151ebff6abf2b263cad9f204138b14,PodSandboxId:1de7d351999a23a7f5726463eb34fd9c478fec111f541435517cd97a2cc3a5a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758325001966050992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qxgj9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6340754-da46-4e31-9f54-feec6a797beb,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a89ff1d5c44294b0a6b690b20f0b8f8fc88e97e56917b2944fb036c040abdb,PodSandboxId:da97ab890a843eb13424019f4a031b251cba4d13263e684590ede4a3203ac1b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b
97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758324994368240805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr2bk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8b6af-3927-4e0c-a567-28aca5e8cd79,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3436dea0618688158ab7ae5e858bc14d034997b2f4f38b97b32b4274515a49ce,PodSandboxId:1b559b4a4e82ced80c6bfe6ad4227f9e84668422a12a82317372e8b4f4c11dab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_EXITED,CreatedAt:1758324994446061048,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321717a-b901-415e-b199-977471c0ff1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5e899743d432ddcb109979a1ab6e43a88582b10f19fce066e46d5777c20fd2,PodSandboxId:f773b827713b4e5ececda93e3a2c843c58d602181aa64c2ea25ab84b0029e3eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTA
INER_RUNNING,CreatedAt:1758324988850025144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b8cf1138d801c35b4e4cb07a863160e,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b446ba7b5e7f19fe9fc931d0c6005d9015217d26588293b0f5ebfba7a46b9f14,PodSandboxId:6a793c0aa0569cdec519d3e4db356bf00a6f008c3d4825fc63c467b421b96247,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758324988840674551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d5c2a791d209fbb8e019f27ce69c24a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b58f5c9dbb6070a2649d925cf0f5e747933442ed0c1077d4e04d65cde0aa5,PodSandboxId:ee9c4e628431cbb4790bcad695abc86d7bad9b9d0133901d3f8e7af771bf2b5b,Metadata:&ContainerMetadata{Name:etcd,Attem
pt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758324988805933722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53b848b476e69d25c1f04609257642b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599395f360461d56196f5088ecd3da31a52b32cd771d2c1745f745d0a8515293,PodSandboxId:68cb9daeb17d4acdf5a
36f6b60a2967df65ea3cc7501a6efe96b06776ecd4bbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758324988741030233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaf7793ef540d751420bc805bb28b292,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},},}" file="otel-collector/interceptors.go:74" id=05115af8-6e9f-4939-9ddc-4b183ce56423 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.839959471Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=3ac8c7f7-2d90-4ec7-894d-c112172a2697 name=/runtime.v1.RuntimeService/Status
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.840029258Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=3ac8c7f7-2d90-4ec7-894d-c112172a2697 name=/runtime.v1.RuntimeService/Status
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.869492399Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9210c56-5219-4e06-a02c-c28ffd9494c4 name=/runtime.v1.RuntimeService/Version
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.869608078Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9210c56-5219-4e06-a02c-c28ffd9494c4 name=/runtime.v1.RuntimeService/Version
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.871733105Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15f19755-f946-4514-a69f-ed8932a985b6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.872424414Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758325547872397329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15f19755-f946-4514-a69f-ed8932a985b6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.873370981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75f6a74a-5852-4c18-bc37-34fdb8fa61d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.873595276Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75f6a74a-5852-4c18-bc37-34fdb8fa61d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.873830311Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d535d833bf717efb956cf2333c1b066d736288d6430c819636c8898614705cba,PodSandboxId:7dc46f2e1c62053024540103a673c22fa46949c4bd447d6861b210bb30b60eff,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1758325353635515732,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-mwz54,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 911a6b3c-3441-40a6-ac54-04cf424c179b,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec2522d96a5ed9e24e9ecf32e0d0c252795dcd9c98c44a5b6a16b9ea4a1a9e2,PodSandboxId:206391a3986c3bb63b39cf935fc260351d4023074f7b7aba696e1594757a1cef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758325038465690059,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 874b675f-ecbe-4052-a6fb-bc7a6028db03,},Annotations
:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:375a7d2fd8f93e60f9c3abc0d93eabc7ac7f390406b748cd91b0f70fa45e969d,PodSandboxId:1b559b4a4e82ced80c6bfe6ad4227f9e84668422a12a82317372e8b4f4c11dab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758325025436851219,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321717a-b901-415e-b199-977471c0ff1f,},Annotations:map[string]
string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34682f7775d4f0930f8412d24f895144c1151ebff6abf2b263cad9f204138b14,PodSandboxId:1de7d351999a23a7f5726463eb34fd9c478fec111f541435517cd97a2cc3a5a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758325001966050992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qxgj9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6340754-da46-4e31-9f54-feec6a797beb,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a89ff1d5c44294b0a6b690b20f0b8f8fc88e97e56917b2944fb036c040abdb,PodSandboxId:da97ab890a843eb13424019f4a031b251cba4d13263e684590ede4a3203ac1b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b
97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758324994368240805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr2bk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8b6af-3927-4e0c-a567-28aca5e8cd79,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3436dea0618688158ab7ae5e858bc14d034997b2f4f38b97b32b4274515a49ce,PodSandboxId:1b559b4a4e82ced80c6bfe6ad4227f9e84668422a12a82317372e8b4f4c11dab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_EXITED,CreatedAt:1758324994446061048,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321717a-b901-415e-b199-977471c0ff1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5e899743d432ddcb109979a1ab6e43a88582b10f19fce066e46d5777c20fd2,PodSandboxId:f773b827713b4e5ececda93e3a2c843c58d602181aa64c2ea25ab84b0029e3eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTA
INER_RUNNING,CreatedAt:1758324988850025144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b8cf1138d801c35b4e4cb07a863160e,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b446ba7b5e7f19fe9fc931d0c6005d9015217d26588293b0f5ebfba7a46b9f14,PodSandboxId:6a793c0aa0569cdec519d3e4db356bf00a6f008c3d4825fc63c467b421b96247,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758324988840674551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d5c2a791d209fbb8e019f27ce69c24a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b58f5c9dbb6070a2649d925cf0f5e747933442ed0c1077d4e04d65cde0aa5,PodSandboxId:ee9c4e628431cbb4790bcad695abc86d7bad9b9d0133901d3f8e7af771bf2b5b,Metadata:&ContainerMetadata{Name:etcd,Attem
pt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758324988805933722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53b848b476e69d25c1f04609257642b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599395f360461d56196f5088ecd3da31a52b32cd771d2c1745f745d0a8515293,PodSandboxId:68cb9daeb17d4acdf5a
36f6b60a2967df65ea3cc7501a6efe96b06776ecd4bbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758324988741030233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaf7793ef540d751420bc805bb28b292,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},},}" file="otel-collector/interceptors.go:74" id=75f6a74a-5852-4c18-bc37-34fdb8fa61d9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.911432942Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68d399b3-33fb-4363-9eff-303034263236 name=/runtime.v1.RuntimeService/Version
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.911615241Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68d399b3-33fb-4363-9eff-303034263236 name=/runtime.v1.RuntimeService/Version
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.916443296Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e0e064d-b8d2-4518-b940-e27010423fad name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.916955095Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758325547916928152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e0e064d-b8d2-4518-b940-e27010423fad name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.917589743Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0fb4166-42c1-4573-b3d4-387a77b38553 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.917776235Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0fb4166-42c1-4573-b3d4-387a77b38553 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.918063949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d535d833bf717efb956cf2333c1b066d736288d6430c819636c8898614705cba,PodSandboxId:7dc46f2e1c62053024540103a673c22fa46949c4bd447d6861b210bb30b60eff,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1758325353635515732,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-mwz54,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 911a6b3c-3441-40a6-ac54-04cf424c179b,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec2522d96a5ed9e24e9ecf32e0d0c252795dcd9c98c44a5b6a16b9ea4a1a9e2,PodSandboxId:206391a3986c3bb63b39cf935fc260351d4023074f7b7aba696e1594757a1cef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758325038465690059,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 874b675f-ecbe-4052-a6fb-bc7a6028db03,},Annotations
:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:375a7d2fd8f93e60f9c3abc0d93eabc7ac7f390406b748cd91b0f70fa45e969d,PodSandboxId:1b559b4a4e82ced80c6bfe6ad4227f9e84668422a12a82317372e8b4f4c11dab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758325025436851219,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321717a-b901-415e-b199-977471c0ff1f,},Annotations:map[string]
string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34682f7775d4f0930f8412d24f895144c1151ebff6abf2b263cad9f204138b14,PodSandboxId:1de7d351999a23a7f5726463eb34fd9c478fec111f541435517cd97a2cc3a5a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758325001966050992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qxgj9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6340754-da46-4e31-9f54-feec6a797beb,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a89ff1d5c44294b0a6b690b20f0b8f8fc88e97e56917b2944fb036c040abdb,PodSandboxId:da97ab890a843eb13424019f4a031b251cba4d13263e684590ede4a3203ac1b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b
97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758324994368240805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr2bk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8b6af-3927-4e0c-a567-28aca5e8cd79,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3436dea0618688158ab7ae5e858bc14d034997b2f4f38b97b32b4274515a49ce,PodSandboxId:1b559b4a4e82ced80c6bfe6ad4227f9e84668422a12a82317372e8b4f4c11dab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_EXITED,CreatedAt:1758324994446061048,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321717a-b901-415e-b199-977471c0ff1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5e899743d432ddcb109979a1ab6e43a88582b10f19fce066e46d5777c20fd2,PodSandboxId:f773b827713b4e5ececda93e3a2c843c58d602181aa64c2ea25ab84b0029e3eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTA
INER_RUNNING,CreatedAt:1758324988850025144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b8cf1138d801c35b4e4cb07a863160e,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b446ba7b5e7f19fe9fc931d0c6005d9015217d26588293b0f5ebfba7a46b9f14,PodSandboxId:6a793c0aa0569cdec519d3e4db356bf00a6f008c3d4825fc63c467b421b96247,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758324988840674551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d5c2a791d209fbb8e019f27ce69c24a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b58f5c9dbb6070a2649d925cf0f5e747933442ed0c1077d4e04d65cde0aa5,PodSandboxId:ee9c4e628431cbb4790bcad695abc86d7bad9b9d0133901d3f8e7af771bf2b5b,Metadata:&ContainerMetadata{Name:etcd,Attem
pt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758324988805933722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53b848b476e69d25c1f04609257642b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599395f360461d56196f5088ecd3da31a52b32cd771d2c1745f745d0a8515293,PodSandboxId:68cb9daeb17d4acdf5a
36f6b60a2967df65ea3cc7501a6efe96b06776ecd4bbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758324988741030233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaf7793ef540d751420bc805bb28b292,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},},}" file="otel-collector/interceptors.go:74" id=c0fb4166-42c1-4573-b3d4-387a77b38553 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.966593371Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5fd3656d-8d39-4394-bfd8-6c555e4696bc name=/runtime.v1.RuntimeService/Version
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.966680006Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5fd3656d-8d39-4394-bfd8-6c555e4696bc name=/runtime.v1.RuntimeService/Version
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.970487134Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5970bae-db09-4273-8283-1de5fc4d3bbe name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.971484419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758325547971457380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5970bae-db09-4273-8283-1de5fc4d3bbe name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.973857612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad887727-1bba-4a28-878f-f897d284a515 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.973941122Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad887727-1bba-4a28-878f-f897d284a515 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:45:47 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:45:47.974839005Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d535d833bf717efb956cf2333c1b066d736288d6430c819636c8898614705cba,PodSandboxId:7dc46f2e1c62053024540103a673c22fa46949c4bd447d6861b210bb30b60eff,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1758325353635515732,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-mwz54,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 911a6b3c-3441-40a6-ac54-04cf424c179b,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec2522d96a5ed9e24e9ecf32e0d0c252795dcd9c98c44a5b6a16b9ea4a1a9e2,PodSandboxId:206391a3986c3bb63b39cf935fc260351d4023074f7b7aba696e1594757a1cef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758325038465690059,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 874b675f-ecbe-4052-a6fb-bc7a6028db03,},Annotations
:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:375a7d2fd8f93e60f9c3abc0d93eabc7ac7f390406b748cd91b0f70fa45e969d,PodSandboxId:1b559b4a4e82ced80c6bfe6ad4227f9e84668422a12a82317372e8b4f4c11dab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758325025436851219,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321717a-b901-415e-b199-977471c0ff1f,},Annotations:map[string]
string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34682f7775d4f0930f8412d24f895144c1151ebff6abf2b263cad9f204138b14,PodSandboxId:1de7d351999a23a7f5726463eb34fd9c478fec111f541435517cd97a2cc3a5a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758325001966050992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qxgj9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6340754-da46-4e31-9f54-feec6a797beb,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a89ff1d5c44294b0a6b690b20f0b8f8fc88e97e56917b2944fb036c040abdb,PodSandboxId:da97ab890a843eb13424019f4a031b251cba4d13263e684590ede4a3203ac1b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b
97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758324994368240805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr2bk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8b6af-3927-4e0c-a567-28aca5e8cd79,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3436dea0618688158ab7ae5e858bc14d034997b2f4f38b97b32b4274515a49ce,PodSandboxId:1b559b4a4e82ced80c6bfe6ad4227f9e84668422a12a82317372e8b4f4c11dab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_EXITED,CreatedAt:1758324994446061048,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321717a-b901-415e-b199-977471c0ff1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5e899743d432ddcb109979a1ab6e43a88582b10f19fce066e46d5777c20fd2,PodSandboxId:f773b827713b4e5ececda93e3a2c843c58d602181aa64c2ea25ab84b0029e3eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTA
INER_RUNNING,CreatedAt:1758324988850025144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b8cf1138d801c35b4e4cb07a863160e,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b446ba7b5e7f19fe9fc931d0c6005d9015217d26588293b0f5ebfba7a46b9f14,PodSandboxId:6a793c0aa0569cdec519d3e4db356bf00a6f008c3d4825fc63c467b421b96247,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758324988840674551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d5c2a791d209fbb8e019f27ce69c24a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b58f5c9dbb6070a2649d925cf0f5e747933442ed0c1077d4e04d65cde0aa5,PodSandboxId:ee9c4e628431cbb4790bcad695abc86d7bad9b9d0133901d3f8e7af771bf2b5b,Metadata:&ContainerMetadata{Name:etcd,Attem
pt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758324988805933722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53b848b476e69d25c1f04609257642b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599395f360461d56196f5088ecd3da31a52b32cd771d2c1745f745d0a8515293,PodSandboxId:68cb9daeb17d4acdf5a
36f6b60a2967df65ea3cc7501a6efe96b06776ecd4bbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758324988741030233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaf7793ef540d751420bc805bb28b292,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},},}" file="otel-collector/interceptors.go:74" id=ad887727-1bba-4a28-878f-f897d284a515 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	d535d833bf717       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                      3 minutes ago       Exited              dashboard-metrics-scraper   6                   7dc46f2e1c620       dashboard-metrics-scraper-6ffb444bf9-mwz54
	6ec2522d96a5e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   8 minutes ago       Running             busybox                     1                   206391a3986c3       busybox
	375a7d2fd8f93       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner         3                   1b559b4a4e82c       storage-provisioner
	34682f7775d4f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      9 minutes ago       Running             coredns                     1                   1de7d351999a2       coredns-66bc5c9577-qxgj9
	3436dea061868       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner         2                   1b559b4a4e82c       storage-provisioner
	74a89ff1d5c44       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      9 minutes ago       Running             kube-proxy                  1                   da97ab890a843       kube-proxy-hr2bk
	8e5e899743d43       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      9 minutes ago       Running             kube-controller-manager     1                   f773b827713b4       kube-controller-manager-default-k8s-diff-port-304197
	b446ba7b5e7f1       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      9 minutes ago       Running             kube-scheduler              1                   6a793c0aa0569       kube-scheduler-default-k8s-diff-port-304197
	2c7b58f5c9dbb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      9 minutes ago       Running             etcd                        1                   ee9c4e628431c       etcd-default-k8s-diff-port-304197
	599395f360461       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      9 minutes ago       Running             kube-apiserver              1                   68cb9daeb17d4       kube-apiserver-default-k8s-diff-port-304197
	
	
	==> coredns [34682f7775d4f0930f8412d24f895144c1151ebff6abf2b263cad9f204138b14] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42182 - 16622 "HINFO IN 6384841599668461451.6666792259869803210. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06272829s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-304197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-304197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=default-k8s-diff-port-304197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_33_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:33:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-304197
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:45:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:43:10 +0000   Fri, 19 Sep 2025 23:33:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:43:10 +0000   Fri, 19 Sep 2025 23:33:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:43:10 +0000   Fri, 19 Sep 2025 23:33:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:43:10 +0000   Fri, 19 Sep 2025 23:36:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    default-k8s-diff-port-304197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 3bba7fc49fe94e02803c493aa434f4d2
	  System UUID:                3bba7fc4-9fe9-4e02-803c-493aa434f4d2
	  Boot ID:                    4b56d174-1dea-46ec-8824-157e27d5086d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-qxgj9                                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     12m
	  kube-system                 etcd-default-k8s-diff-port-304197                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         12m
	  kube-system                 kube-apiserver-default-k8s-diff-port-304197             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-304197    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-hr2bk                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-diff-port-304197             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-746fcd58dc-7rhgt                         100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         11m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-mwz54              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dscz6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 9m13s                  kube-proxy       
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeReady                12m                    kubelet          Node default-k8s-diff-port-304197 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node default-k8s-diff-port-304197 event: Registered Node default-k8s-diff-port-304197 in Controller
	  Normal   Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m21s (x8 over 9m21s)  kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m21s (x8 over 9m21s)  kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m21s (x7 over 9m21s)  kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9m15s                  kubelet          Node default-k8s-diff-port-304197 has been rebooted, boot id: 4b56d174-1dea-46ec-8824-157e27d5086d
	  Normal   RegisteredNode           9m10s                  node-controller  Node default-k8s-diff-port-304197 event: Registered Node default-k8s-diff-port-304197 in Controller
	
	
	==> dmesg <==
	[Sep19 23:36] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000049] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000768] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.831113] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.161296] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.141672] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.686796] kauditd_printk_skb: 168 callbacks suppressed
	[  +0.347923] kauditd_printk_skb: 161 callbacks suppressed
	[  +2.850380] kauditd_printk_skb: 182 callbacks suppressed
	[Sep19 23:37] kauditd_printk_skb: 56 callbacks suppressed
	[ +12.043446] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.180570] kauditd_printk_skb: 11 callbacks suppressed
	[Sep19 23:38] kauditd_printk_skb: 6 callbacks suppressed
	[Sep19 23:39] kauditd_printk_skb: 6 callbacks suppressed
	[Sep19 23:42] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [2c7b58f5c9dbb6070a2649d925cf0f5e747933442ed0c1077d4e04d65cde0aa5] <==
	{"level":"warn","ts":"2025-09-19T23:36:31.167416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.188348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.200467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.229155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.239532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.257457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.277406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.293390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.345391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.370137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.394072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.417504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.442683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.462824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.481089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.504181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.521012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.544462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.575234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.610685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.637822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.784014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:42.192004Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.415877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-qxgj9\" limit:1 ","response":"range_response_count:1 size:5464"}
	{"level":"info","ts":"2025-09-19T23:36:42.192117Z","caller":"traceutil/trace.go:172","msg":"trace[33012178] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-qxgj9; range_end:; response_count:1; response_revision:674; }","duration":"131.564339ms","start":"2025-09-19T23:36:42.060540Z","end":"2025-09-19T23:36:42.192105Z","steps":["trace[33012178] 'range keys from in-memory index tree'  (duration: 131.025228ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:36:44.628632Z","caller":"traceutil/trace.go:172","msg":"trace[1541362267] transaction","detail":"{read_only:false; response_revision:693; number_of_response:1; }","duration":"141.674497ms","start":"2025-09-19T23:36:44.486944Z","end":"2025-09-19T23:36:44.628619Z","steps":["trace[1541362267] 'process raft request'  (duration: 141.549801ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:45:48 up 9 min,  0 users,  load average: 0.27, 0.37, 0.20
	Linux default-k8s-diff-port-304197 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Sep  9 02:24:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [599395f360461d56196f5088ecd3da31a52b32cd771d2c1745f745d0a8515293] <==
	I0919 23:41:37.133403       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 23:42:33.653697       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0919 23:42:33.950595       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:42:33.950639       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 23:42:33.950653       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 23:42:33.952781       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:42:33.952826       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:42:33.952834       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 23:42:45.061757       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 23:44:01.778174       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 23:44:09.622271       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0919 23:44:33.951709       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:44:33.951781       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 23:44:33.951794       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 23:44:33.952931       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:44:33.953016       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:44:33.953030       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 23:45:09.864137       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 23:45:13.040397       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [8e5e899743d432ddcb109979a1ab6e43a88582b10f19fce066e46d5777c20fd2] <==
	I0919 23:39:38.544750       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:40:08.464269       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:40:08.553182       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:40:38.469238       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:40:38.562668       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:41:08.474268       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:41:08.571705       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:41:38.480944       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:41:38.580575       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:42:08.486100       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:42:08.589117       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:42:38.491871       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:42:38.599409       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:43:08.497812       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:43:08.611578       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:43:38.503231       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:43:38.622160       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:44:08.509107       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:44:08.630829       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:44:38.514676       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:44:38.639980       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:45:08.519973       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:45:08.648817       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:45:38.525048       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:45:38.657601       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [74a89ff1d5c44294b0a6b690b20f0b8f8fc88e97e56917b2944fb036c040abdb] <==
	I0919 23:36:34.923447       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:36:35.024261       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:36:35.024564       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.80"]
	E0919 23:36:35.025502       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:36:35.078609       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0919 23:36:35.078750       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 23:36:35.078847       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:36:35.094144       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:36:35.094604       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:36:35.094654       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:36:35.100126       1 config.go:200] "Starting service config controller"
	I0919 23:36:35.100194       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:36:35.100227       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:36:35.100241       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:36:35.100261       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:36:35.100353       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:36:35.107078       1 config.go:309] "Starting node config controller"
	I0919 23:36:35.107588       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:36:35.107708       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:36:35.200954       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 23:36:35.200955       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 23:36:35.200978       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b446ba7b5e7f19fe9fc931d0c6005d9015217d26588293b0f5ebfba7a46b9f14] <==
	I0919 23:36:32.885210       1 serving.go:386] Generated self-signed cert in-memory
	I0919 23:36:34.500260       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 23:36:34.504506       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:36:34.516260       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 23:36:34.516864       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0919 23:36:34.516963       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0919 23:36:34.517024       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 23:36:34.518422       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:36:34.520357       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:36:34.520456       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 23:36:34.520469       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 23:36:34.617452       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0919 23:36:34.621011       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 23:36:34.621075       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 23:44:57 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:44:57.734435    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758325497733938194  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:44:57 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:44:57.734485    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758325497733938194  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:44:59 default-k8s-diff-port-304197 kubelet[1212]: I0919 23:44:59.617490    1212 scope.go:117] "RemoveContainer" containerID="d535d833bf717efb956cf2333c1b066d736288d6430c819636c8898614705cba"
	Sep 19 23:44:59 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:44:59.617812    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mwz54_kubernetes-dashboard(911a6b3c-3441-40a6-ac54-04cf424c179b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mwz54" podUID="911a6b3c-3441-40a6-ac54-04cf424c179b"
	Sep 19 23:45:07 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:07.737264    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758325507736372909  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:45:07 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:07.737585    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758325507736372909  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:45:09 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:09.618845    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dscz6" podUID="92d8e2bb-d9b6-4e61-8313-c3c386feb5dd"
	Sep 19 23:45:11 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:11.620140    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7rhgt" podUID="64e629d6-5d4b-49e5-ac73-5a67b6f877b4"
	Sep 19 23:45:14 default-k8s-diff-port-304197 kubelet[1212]: I0919 23:45:14.616882    1212 scope.go:117] "RemoveContainer" containerID="d535d833bf717efb956cf2333c1b066d736288d6430c819636c8898614705cba"
	Sep 19 23:45:14 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:14.617055    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mwz54_kubernetes-dashboard(911a6b3c-3441-40a6-ac54-04cf424c179b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mwz54" podUID="911a6b3c-3441-40a6-ac54-04cf424c179b"
	Sep 19 23:45:17 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:17.739243    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758325517738890881  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:45:17 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:17.739346    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758325517738890881  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:45:23 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:23.619181    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dscz6" podUID="92d8e2bb-d9b6-4e61-8313-c3c386feb5dd"
	Sep 19 23:45:25 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:25.617415    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7rhgt" podUID="64e629d6-5d4b-49e5-ac73-5a67b6f877b4"
	Sep 19 23:45:26 default-k8s-diff-port-304197 kubelet[1212]: I0919 23:45:26.616747    1212 scope.go:117] "RemoveContainer" containerID="d535d833bf717efb956cf2333c1b066d736288d6430c819636c8898614705cba"
	Sep 19 23:45:26 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:26.616920    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mwz54_kubernetes-dashboard(911a6b3c-3441-40a6-ac54-04cf424c179b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mwz54" podUID="911a6b3c-3441-40a6-ac54-04cf424c179b"
	Sep 19 23:45:27 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:27.740914    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758325527740446456  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:45:27 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:27.740958    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758325527740446456  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:45:36 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:36.617741    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7rhgt" podUID="64e629d6-5d4b-49e5-ac73-5a67b6f877b4"
	Sep 19 23:45:37 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:37.743020    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758325537742359367  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:45:37 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:37.743060    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758325537742359367  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:45:38 default-k8s-diff-port-304197 kubelet[1212]: I0919 23:45:38.616234    1212 scope.go:117] "RemoveContainer" containerID="d535d833bf717efb956cf2333c1b066d736288d6430c819636c8898614705cba"
	Sep 19 23:45:38 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:38.616489    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mwz54_kubernetes-dashboard(911a6b3c-3441-40a6-ac54-04cf424c179b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mwz54" podUID="911a6b3c-3441-40a6-ac54-04cf424c179b"
	Sep 19 23:45:47 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:47.746196    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758325547744923951  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:45:47 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:45:47.746223    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758325547744923951  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	
	
	==> storage-provisioner [3436dea0618688158ab7ae5e858bc14d034997b2f4f38b97b32b4274515a49ce] <==
	I0919 23:36:34.715204       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 23:37:04.728500       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [375a7d2fd8f93e60f9c3abc0d93eabc7ac7f390406b748cd91b0f70fa45e969d] <==
	W0919 23:45:23.675353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:25.679554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:25.684488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:27.687788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:27.694399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:29.700004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:29.707116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:31.711118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:31.720541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:33.723394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:33.730120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:35.733006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:35.742637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:37.749243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:37.755111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:39.758684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:39.767841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:41.770880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:41.775789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:43.779686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:43.785701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:45.789973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:45.795938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:47.799679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:45:47.808498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
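Note on the storage-provisioner log above: the flood of "v1 Endpoints is deprecated in v1.33+" lines is client-go surfacing the API server's deprecation warning on every Endpoints read or write; per the warning text itself, the replacement is discovery.k8s.io/v1 EndpointSlice, and these lines are noise rather than a failure. A minimal sketch for inspecting the replacement objects on this profile (context name taken from the transcript above):

	# EndpointSlice is the replacement named in the deprecation warning
	kubectl --context default-k8s-diff-port-304197 get endpointslices.discovery.k8s.io -A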
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-304197 -n default-k8s-diff-port-304197
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-304197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-7rhgt kubernetes-dashboard-855c9754f9-dscz6
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-304197 describe pod metrics-server-746fcd58dc-7rhgt kubernetes-dashboard-855c9754f9-dscz6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-304197 describe pod metrics-server-746fcd58dc-7rhgt kubernetes-dashboard-855c9754f9-dscz6: exit status 1 (60.197109ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-7rhgt" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-dscz6" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-304197 describe pod metrics-server-746fcd58dc-7rhgt kubernetes-dashboard-855c9754f9-dscz6: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.53s)
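Note: the kubelet entries above show why the dashboard and metrics-server pods never become Ready during this group's post-stop checks: the dashboard image pull is rejected with Docker Hub's "toomanyrequests" unauthenticated rate limit, and the metrics-server image points at the unresolvable fake.domain registry, so both pods stay in ImagePullBackOff until the wait times out. A minimal sketch for confirming and possibly mitigating the rate-limit half (context/profile names taken from the transcript; the side-load is an assumption and may not satisfy the pod's by-digest image reference):

	# Confirm the pull failures from cluster events
	kubectl --context default-k8s-diff-port-304197 get events -n kubernetes-dashboard --field-selector reason=Failed --sort-by=.lastTimestamp

	# Possible mitigation: pull once on the host, then side-load into the node
	docker pull kubernetesui/dashboard:v2.7.0
	minikube -p default-k8s-diff-port-304197 image load kubernetesui/dashboard:v2.7.0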

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dscz6" [92d8e2bb-d9b6-4e61-8313-c3c386feb5dd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0919 23:45:50.532052   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:46:07.481018   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:46:21.461075   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:46:35.183165   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:46:36.655799   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:46:38.005941   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:46:50.250847   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:47:05.708715   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:47:17.952529   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:48:06.673124   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:48:34.373950   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:49:23.629722   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:49:24.432238   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:49:47.423421   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/old-k8s-version-551579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:49:51.432692   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:49:58.387790   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:50:17.853633   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/no-preload-065517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:51:07.480951   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:51:10.492780   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/old-k8s-version-551579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:51:36.655685   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:51:38.006897   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:51:40.920030   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/no-preload-065517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:51:50.250251   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:53:06.672844   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:54:23.629229   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:54:24.431945   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:54:47.423651   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/old-k8s-version-551579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-304197 -n default-k8s-diff-port-304197
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-19 23:54:49.418636271 +0000 UTC m=+6044.816610466
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-304197 describe po kubernetes-dashboard-855c9754f9-dscz6 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context default-k8s-diff-port-304197 describe po kubernetes-dashboard-855c9754f9-dscz6 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-dscz6
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-304197/192.168.39.80
Start Time:       Fri, 19 Sep 2025 23:36:40 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kvpxm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-kvpxm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dscz6 to default-k8s-diff-port-304197
Warning  Failed     15m                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    12m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     12m (x4 over 17m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     12m (x5 over 17m)     kubelet            Error: ErrImagePull
Normal   BackOff    2m48s (x46 over 17m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     2m22s (x48 over 17m)  kubelet            Error: ImagePullBackOff
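The events above show the root cause: every pull of the dashboard image is rejected by Docker Hub with toomanyrequests, the unauthenticated pull rate limit. Docker's documented way to inspect the current anonymous quota is to fetch a token for the ratelimitpreview/test repository and read the RateLimit-* headers from a HEAD request against its manifest; a minimal sketch in Go (error handling kept terse):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Checks the anonymous Docker Hub pull-rate quota by following Docker's
// documented procedure: fetch an anonymous bearer token, then issue a
// HEAD request and read the RateLimit-* response headers.
func main() {
	tokURL := "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull"
	resp, err := http.Get(tokURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	req, _ := http.NewRequest(http.MethodHead,
		"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	head, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer head.Body.Close()
	// e.g. "100;w=21600" means 100 pulls per 6 hours for anonymous clients.
	fmt.Println("ratelimit-limit:    ", head.Header.Get("RateLimit-Limit"))
	fmt.Println("ratelimit-remaining:", head.Header.Get("RateLimit-Remaining"))
}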
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-304197 logs kubernetes-dashboard-855c9754f9-dscz6 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-304197 logs kubernetes-dashboard-855c9754f9-dscz6 -n kubernetes-dashboard: exit status 1 (74.188538ms)
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-dscz6" is waiting to start: trying and failing to pull image
** /stderr **
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-304197 logs kubernetes-dashboard-855c9754f9-dscz6 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-304197 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-304197 -n default-k8s-diff-port-304197
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-304197 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-304197 logs -n 25: (1.401442315s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-024908 sudo iptables -t nat -L -n -v                                 │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo systemctl status kubelet --all --full --no-pager         │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo systemctl cat kubelet --no-pager                         │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo systemctl status docker --all --full --no-pager          │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │                     │
	│ ssh     │ -p bridge-024908 sudo systemctl cat docker --no-pager                          │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo cat /etc/docker/daemon.json                              │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo docker system info                                       │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │                     │
	│ ssh     │ -p bridge-024908 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │                     │
	│ ssh     │ -p bridge-024908 sudo systemctl cat cri-docker --no-pager                      │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │                     │
	│ ssh     │ -p bridge-024908 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo cri-dockerd --version                                    │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo systemctl status containerd --all --full --no-pager      │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │                     │
	│ ssh     │ -p bridge-024908 sudo systemctl cat containerd --no-pager                      │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo cat /lib/systemd/system/containerd.service               │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo cat /etc/containerd/config.toml                          │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo containerd config dump                                   │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo systemctl status crio --all --full --no-pager            │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo systemctl cat crio --no-pager                            │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ ssh     │ -p bridge-024908 sudo crio config                                              │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	│ delete  │ -p bridge-024908                                                               │ bridge-024908 │ jenkins │ v1.37.0 │ 19 Sep 25 23:38 UTC │ 19 Sep 25 23:38 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:36:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:36:38.514309   75550 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:36:38.514617   75550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:36:38.514630   75550 out.go:374] Setting ErrFile to fd 2...
	I0919 23:36:38.514638   75550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:36:38.514987   75550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	I0919 23:36:38.515692   75550 out.go:368] Setting JSON to false
	I0919 23:36:38.517068   75550 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8325,"bootTime":1758316673,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:36:38.517143   75550 start.go:140] virtualization: kvm guest
	I0919 23:36:38.519365   75550 out.go:179] * [bridge-024908] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:36:38.520862   75550 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:36:38.520868   75550 notify.go:220] Checking for updates...
	I0919 23:36:38.523475   75550 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:36:38.524802   75550 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 23:36:38.526039   75550 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 23:36:38.527638   75550 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:36:38.528915   75550 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:36:38.530830   75550 config.go:182] Loaded profile config "default-k8s-diff-port-304197": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:36:38.530968   75550 config.go:182] Loaded profile config "enable-default-cni-024908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:36:38.531114   75550 config.go:182] Loaded profile config "flannel-024908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:36:38.531247   75550 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:36:38.583429   75550 out.go:179] * Using the kvm2 driver based on user configuration
	I0919 23:36:38.584859   75550 start.go:304] selected driver: kvm2
	I0919 23:36:38.584876   75550 start.go:918] validating driver "kvm2" against <nil>
	I0919 23:36:38.584888   75550 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:36:38.585778   75550 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:36:38.585880   75550 install.go:123] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21594-14764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 23:36:38.606707   75550 install.go:134] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is {Version:v1.37.0 Commit:6e37ee63f758843bb5fe33c3a528c564c4b83d53}
	I0919 23:36:38.606773   75550 install.go:123] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21594-14764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 23:36:38.625076   75550 install.go:134] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is {Version:v1.37.0 Commit:6e37ee63f758843bb5fe33c3a528c564c4b83d53}
	I0919 23:36:38.625126   75550 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 23:36:38.625392   75550 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:36:38.625424   75550 cni.go:84] Creating CNI manager for "bridge"
	I0919 23:36:38.625431   75550 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 23:36:38.625506   75550 start.go:348] cluster config:
	{Name:bridge-024908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-024908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:36:38.625627   75550 iso.go:125] acquiring lock: {Name:mk21ede999fca7478b081d3e470ef3cc88b140f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:36:38.628622   75550 out.go:179] * Starting "bridge-024908" primary control-plane node in "bridge-024908" cluster
	I0919 23:36:38.630015   75550 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 23:36:38.630084   75550 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 23:36:38.630097   75550 cache.go:58] Caching tarball of preloaded images
	I0919 23:36:38.630290   75550 preload.go:172] Found /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 23:36:38.630308   75550 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 23:36:38.630428   75550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/config.json ...
	I0919 23:36:38.630452   75550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/config.json: {Name:mke6f75eee0e949757ac34942cba06e9beb4106a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:36:38.630642   75550 start.go:360] acquireMachinesLock for bridge-024908: {Name:mke6cd936cf5da66e4fbcd4dcd8a2d3d3cae6c7b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0919 23:36:38.630679   75550 start.go:364] duration metric: took 20.281µs to acquireMachinesLock for "bridge-024908"
	I0919 23:36:38.630705   75550 start.go:93] Provisioning new machine with config: &{Name:bridge-024908 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-024908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 23:36:38.630809   75550 start.go:125] createHost starting for "" (driver="kvm2")
	I0919 23:36:35.915454   73258 node_ready.go:49] node "flannel-024908" is "Ready"
	I0919 23:36:35.915481   73258 node_ready.go:38] duration metric: took 6.005163719s for node "flannel-024908" to be "Ready" ...
	I0919 23:36:35.915494   73258 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:36:35.915547   73258 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:36:35.968546   73258 api_server.go:72] duration metric: took 7.171831903s to wait for apiserver process to appear ...
	I0919 23:36:35.968577   73258 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:36:35.968596   73258 api_server.go:253] Checking apiserver healthz at https://192.168.50.28:8443/healthz ...
	I0919 23:36:35.977779   73258 api_server.go:279] https://192.168.50.28:8443/healthz returned 200:
	ok
	I0919 23:36:35.979219   73258 api_server.go:141] control plane version: v1.34.0
	I0919 23:36:35.979246   73258 api_server.go:131] duration metric: took 10.661874ms to wait for apiserver health ...
	I0919 23:36:35.979256   73258 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:36:35.983890   73258 system_pods.go:59] 7 kube-system pods found
	I0919 23:36:35.983930   73258 system_pods.go:61] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:35.983938   73258 system_pods.go:61] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:35.983946   73258 system_pods.go:61] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:35.983952   73258 system_pods.go:61] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:35.983965   73258 system_pods.go:61] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:35.983969   73258 system_pods.go:61] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:35.983974   73258 system_pods.go:61] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:36:35.983986   73258 system_pods.go:74] duration metric: took 4.72304ms to wait for pod list to return data ...
	I0919 23:36:35.984001   73258 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:36:35.987409   73258 default_sa.go:45] found service account: "default"
	I0919 23:36:35.987437   73258 default_sa.go:55] duration metric: took 3.42862ms for default service account to be created ...
	I0919 23:36:35.987447   73258 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:36:35.991154   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:35.991189   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:35.991196   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:35.991206   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:35.991212   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:35.991217   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:35.991222   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:35.991233   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:36:35.991258   73258 retry.go:31] will retry after 296.545363ms: missing components: kube-dns
	I0919 23:36:36.401326   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:36.401364   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:36.401372   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:36.401380   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:36.401386   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:36.401396   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:36.401401   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:36.401408   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:36:36.401425   73258 retry.go:31] will retry after 318.074654ms: missing components: kube-dns
	I0919 23:36:36.730556   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:36.730609   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:36.730618   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:36.730640   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:36.730651   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:36.730655   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:36.730659   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:36.730664   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:36:36.730678   73258 retry.go:31] will retry after 300.035963ms: missing components: kube-dns
	I0919 23:36:37.037282   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:37.037323   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:37.037332   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:37.037344   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:37.037350   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:37.037355   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:37.037360   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:37.037367   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:36:37.037382   73258 retry.go:31] will retry after 557.978506ms: missing components: kube-dns
	I0919 23:36:37.600432   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:37.600472   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:37.600480   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:37.600488   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:37.600493   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:37.600499   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:37.600503   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:37.600508   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Running
	I0919 23:36:37.600525   73258 retry.go:31] will retry after 650.280663ms: missing components: kube-dns
	I0919 23:36:38.257373   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:38.257415   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:38.257424   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:38.257437   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:38.257451   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:38.257456   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:38.257462   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:38.257472   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Running
	I0919 23:36:38.257488   73258 retry.go:31] will retry after 900.725007ms: missing components: kube-dns
	I0919 23:36:39.166304   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:39.166367   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:39.166379   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:39.166388   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:39.166413   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:39.166421   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:39.166426   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:39.166430   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Running
	I0919 23:36:39.166447   73258 retry.go:31] will retry after 950.016778ms: missing components: kube-dns
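The "will retry after ...: missing components: kube-dns" lines above are minikube's readiness poll: list the kube-system pods, and retry with a growing, jittered delay until coredns reports Running. A minimal fixed-interval sketch of the same pattern with client-go (the real loop in system_pods.go backs off rather than polling at a constant interval):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForKubeDNS polls kube-system until a coredns pod reports Running,
// mirroring the retry loop visible in the log above.
func waitForKubeDNS(ctx context.Context, cs kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kube-dns", // label used by coredns in minikube
			})
			if err != nil {
				return false, nil // treat transient API errors as retryable
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			fmt.Println("will retry: missing components: kube-dns")
			return false, nil
		})
}

func main() {
	// Build a clientset from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := waitForKubeDNS(context.Background(), cs); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}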
	I0919 23:36:37.247291   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0919 23:36:37.247309   73436 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0919 23:36:37.247333   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHHostname
	I0919 23:36:37.247539   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | domain default-k8s-diff-port-304197 has defined MAC address 52:54:00:9c:dc:cc in network mk-default-k8s-diff-port-304197
	I0919 23:36:37.248720   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:dc:cc", ip: ""} in network mk-default-k8s-diff-port-304197: {Iface:virbr1 ExpiryTime:2025-09-20 00:36:14 +0000 UTC Type:0 Mac:52:54:00:9c:dc:cc Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:default-k8s-diff-port-304197 Clientid:01:52:54:00:9c:dc:cc}
	I0919 23:36:37.248764   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | domain default-k8s-diff-port-304197 has defined IP address 192.168.39.80 and MAC address 52:54:00:9c:dc:cc in network mk-default-k8s-diff-port-304197
	I0919 23:36:37.249161   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | domain default-k8s-diff-port-304197 has defined MAC address 52:54:00:9c:dc:cc in network mk-default-k8s-diff-port-304197
	I0919 23:36:37.249691   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHPort
	I0919 23:36:37.249958   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHKeyPath
	I0919 23:36:37.250146   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHUsername
	I0919 23:36:37.250302   73436 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/default-k8s-diff-port-304197/id_rsa Username:docker}
	I0919 23:36:37.250373   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:dc:cc", ip: ""} in network mk-default-k8s-diff-port-304197: {Iface:virbr1 ExpiryTime:2025-09-20 00:36:14 +0000 UTC Type:0 Mac:52:54:00:9c:dc:cc Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:default-k8s-diff-port-304197 Clientid:01:52:54:00:9c:dc:cc}
	I0919 23:36:37.250400   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | domain default-k8s-diff-port-304197 has defined IP address 192.168.39.80 and MAC address 52:54:00:9c:dc:cc in network mk-default-k8s-diff-port-304197
	I0919 23:36:37.251283   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHPort
	I0919 23:36:37.251973   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHKeyPath
	I0919 23:36:37.252237   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHUsername
	I0919 23:36:37.252742   73436 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/default-k8s-diff-port-304197/id_rsa Username:docker}
	I0919 23:36:37.253166   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | domain default-k8s-diff-port-304197 has defined MAC address 52:54:00:9c:dc:cc in network mk-default-k8s-diff-port-304197
	I0919 23:36:37.253800   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:dc:cc", ip: ""} in network mk-default-k8s-diff-port-304197: {Iface:virbr1 ExpiryTime:2025-09-20 00:36:14 +0000 UTC Type:0 Mac:52:54:00:9c:dc:cc Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:default-k8s-diff-port-304197 Clientid:01:52:54:00:9c:dc:cc}
	I0919 23:36:37.253825   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | domain default-k8s-diff-port-304197 has defined IP address 192.168.39.80 and MAC address 52:54:00:9c:dc:cc in network mk-default-k8s-diff-port-304197
	I0919 23:36:37.254089   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHPort
	I0919 23:36:37.254274   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHKeyPath
	I0919 23:36:37.254419   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHUsername
	I0919 23:36:37.254583   73436 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/default-k8s-diff-port-304197/id_rsa Username:docker}
	I0919 23:36:37.263061   73436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37149
	I0919 23:36:37.263631   73436 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:36:37.264310   73436 main.go:141] libmachine: Using API Version  1
	I0919 23:36:37.264333   73436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:36:37.264709   73436 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:36:37.264888   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetState
	I0919 23:36:37.267085   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .DriverName
	I0919 23:36:37.267305   73436 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:36:37.267326   73436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:36:37.267345   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHHostname
	I0919 23:36:37.272794   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | domain default-k8s-diff-port-304197 has defined MAC address 52:54:00:9c:dc:cc in network mk-default-k8s-diff-port-304197
	I0919 23:36:37.273555   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9c:dc:cc", ip: ""} in network mk-default-k8s-diff-port-304197: {Iface:virbr1 ExpiryTime:2025-09-20 00:36:14 +0000 UTC Type:0 Mac:52:54:00:9c:dc:cc Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:default-k8s-diff-port-304197 Clientid:01:52:54:00:9c:dc:cc}
	I0919 23:36:37.273588   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | domain default-k8s-diff-port-304197 has defined IP address 192.168.39.80 and MAC address 52:54:00:9c:dc:cc in network mk-default-k8s-diff-port-304197
	I0919 23:36:37.273960   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHPort
	I0919 23:36:37.274233   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHKeyPath
	I0919 23:36:37.274382   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .GetSSHUsername
	I0919 23:36:37.274543   73436 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/default-k8s-diff-port-304197/id_rsa Username:docker}
	I0919 23:36:37.528260   73436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:36:37.571831   73436 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-304197" to be "Ready" ...
	I0919 23:36:37.575164   73436 node_ready.go:49] node "default-k8s-diff-port-304197" is "Ready"
	I0919 23:36:37.575195   73436 node_ready.go:38] duration metric: took 3.335681ms for node "default-k8s-diff-port-304197" to be "Ready" ...
	I0919 23:36:37.575213   73436 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:36:37.575269   73436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:36:37.601936   73436 api_server.go:72] duration metric: took 415.188466ms to wait for apiserver process to appear ...
	I0919 23:36:37.601962   73436 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:36:37.601984   73436 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8444/healthz ...
	I0919 23:36:37.614609   73436 api_server.go:279] https://192.168.39.80:8444/healthz returned 200:
	ok
	I0919 23:36:37.616271   73436 api_server.go:141] control plane version: v1.34.0
	I0919 23:36:37.616301   73436 api_server.go:131] duration metric: took 14.330865ms to wait for apiserver health ...
	I0919 23:36:37.616313   73436 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:36:37.624253   73436 system_pods.go:59] 8 kube-system pods found
	I0919 23:36:37.624283   73436 system_pods.go:61] "coredns-66bc5c9577-qxgj9" [f6340754-da46-4e31-9f54-feec6a797beb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:37.624290   73436 system_pods.go:61] "etcd-default-k8s-diff-port-304197" [f55798f9-1fbd-45f9-9428-1814a72e1128] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:36:37.624301   73436 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-304197" [98d39265-9747-45f9-a05b-8791e24fba53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:36:37.624313   73436 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-304197" [d9f5b4bd-1110-4db6-9935-0a0645a71b0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:36:37.624317   73436 system_pods.go:61] "kube-proxy-hr2bk" [02b8b6af-3927-4e0c-a567-28aca5e8cd79] Running
	I0919 23:36:37.624322   73436 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-304197" [15d79a55-0413-42e7-ad1e-394df4d34730] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:36:37.624326   73436 system_pods.go:61] "metrics-server-746fcd58dc-7rhgt" [64e629d6-5d4b-49e5-ac73-5a67b6f877b4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:36:37.624330   73436 system_pods.go:61] "storage-provisioner" [3321717a-b901-415e-b199-977471c0ff1f] Running
	I0919 23:36:37.624335   73436 system_pods.go:74] duration metric: took 8.015678ms to wait for pod list to return data ...
	I0919 23:36:37.624342   73436 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:36:37.632577   73436 default_sa.go:45] found service account: "default"
	I0919 23:36:37.632605   73436 default_sa.go:55] duration metric: took 8.255844ms for default service account to be created ...
	I0919 23:36:37.632619   73436 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:36:37.645788   73436 system_pods.go:86] 8 kube-system pods found
	I0919 23:36:37.645893   73436 system_pods.go:89] "coredns-66bc5c9577-qxgj9" [f6340754-da46-4e31-9f54-feec6a797beb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:37.645924   73436 system_pods.go:89] "etcd-default-k8s-diff-port-304197" [f55798f9-1fbd-45f9-9428-1814a72e1128] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:36:37.645935   73436 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-304197" [98d39265-9747-45f9-a05b-8791e24fba53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:36:37.645945   73436 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-304197" [d9f5b4bd-1110-4db6-9935-0a0645a71b0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:36:37.645951   73436 system_pods.go:89] "kube-proxy-hr2bk" [02b8b6af-3927-4e0c-a567-28aca5e8cd79] Running
	I0919 23:36:37.645960   73436 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-304197" [15d79a55-0413-42e7-ad1e-394df4d34730] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:36:37.645967   73436 system_pods.go:89] "metrics-server-746fcd58dc-7rhgt" [64e629d6-5d4b-49e5-ac73-5a67b6f877b4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:36:37.645972   73436 system_pods.go:89] "storage-provisioner" [3321717a-b901-415e-b199-977471c0ff1f] Running
	I0919 23:36:37.646021   73436 system_pods.go:126] duration metric: took 13.383133ms to wait for k8s-apps to be running ...
	I0919 23:36:37.646044   73436 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:36:37.646114   73436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:36:37.677738   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0919 23:36:37.677769   73436 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0919 23:36:37.700765   73436 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 23:36:37.700795   73436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 23:36:37.713949   73436 system_svc.go:56] duration metric: took 67.894523ms WaitForService to wait for kubelet
	I0919 23:36:37.713984   73436 kubeadm.go:578] duration metric: took 527.238284ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:36:37.714008   73436 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:36:37.722451   73436 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 23:36:37.722478   73436 node_conditions.go:123] node cpu capacity is 2
	I0919 23:36:37.722494   73436 node_conditions.go:105] duration metric: took 8.480884ms to run NodePressure ...
	I0919 23:36:37.722507   73436 start.go:241] waiting for startup goroutines ...
	I0919 23:36:37.747187   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0919 23:36:37.747212   73436 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0919 23:36:37.750876   73436 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 23:36:37.750902   73436 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 23:36:37.787000   73436 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:36:37.787029   73436 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 23:36:37.788115   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0919 23:36:37.788138   73436 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0919 23:36:37.855895   73436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:36:37.859323   73436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:36:37.878047   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0919 23:36:37.878079   73436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0919 23:36:37.888244   73436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:36:37.978290   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0919 23:36:37.978318   73436 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0919 23:36:38.035279   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0919 23:36:38.035306   73436 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0919 23:36:38.127433   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0919 23:36:38.127462   73436 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0919 23:36:38.241825   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0919 23:36:38.241861   73436 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0919 23:36:38.323194   73436 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:36:38.323231   73436 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0919 23:36:38.438195   73436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:36:40.236932   73436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.377572555s)
	I0919 23:36:40.236980   73436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.381049316s)
	I0919 23:36:40.237001   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.236988   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.237012   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.237015   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.237106   73436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.348776664s)
	I0919 23:36:40.237128   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.237140   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.237679   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | Closing plugin on server side
	I0919 23:36:40.237686   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | Closing plugin on server side
	I0919 23:36:40.237686   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.237707   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.237711   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.237717   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.237720   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.237749   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.237757   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.237738   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.239407   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | Closing plugin on server side
	I0919 23:36:40.239438   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | Closing plugin on server side
	I0919 23:36:40.239458   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.239467   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.239474   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.239476   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.239485   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.239488   73436 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-304197"
	I0919 23:36:40.239493   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.239568   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | Closing plugin on server side
	I0919 23:36:40.239594   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.239601   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.239857   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.239908   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.286800   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.286822   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.287186   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | Closing plugin on server side
	I0919 23:36:40.287212   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.287226   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.563396   73436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.125143236s)
	I0919 23:36:40.563464   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.563481   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.563840   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) DBG | Closing plugin on server side
	I0919 23:36:40.563881   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.563894   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.563902   73436 main.go:141] libmachine: Making call to close driver server
	I0919 23:36:40.563910   73436 main.go:141] libmachine: (default-k8s-diff-port-304197) Calling .Close
	I0919 23:36:40.564146   73436 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:36:40.564158   73436 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:36:40.565917   73436 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-304197 addons enable metrics-server
	
	I0919 23:36:40.567165   73436 out.go:179] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0919 23:36:40.568256   73436 addons.go:514] duration metric: took 3.381487183s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0919 23:36:40.568292   73436 start.go:246] waiting for cluster config update ...
	I0919 23:36:40.568303   73436 start.go:255] writing updated cluster config ...
	I0919 23:36:40.568541   73436 ssh_runner.go:195] Run: rm -f paused
	I0919 23:36:40.579261   73436 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:36:40.593680   73436 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qxgj9" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:38.632546   75550 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0919 23:36:38.632722   75550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:36:38.632798   75550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:36:38.648127   75550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39877
	I0919 23:36:38.648866   75550 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:36:38.649539   75550 main.go:141] libmachine: Using API Version  1
	I0919 23:36:38.649566   75550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:36:38.649985   75550 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:36:38.650173   75550 main.go:141] libmachine: (bridge-024908) Calling .GetMachineName
	I0919 23:36:38.650321   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:36:38.650488   75550 start.go:159] libmachine.API.Create for "bridge-024908" (driver="kvm2")
	I0919 23:36:38.650520   75550 client.go:168] LocalClient.Create starting
	I0919 23:36:38.650557   75550 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem
	I0919 23:36:38.650612   75550 main.go:141] libmachine: Decoding PEM data...
	I0919 23:36:38.650632   75550 main.go:141] libmachine: Parsing certificate...
	I0919 23:36:38.650742   75550 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem
	I0919 23:36:38.650773   75550 main.go:141] libmachine: Decoding PEM data...
	I0919 23:36:38.650791   75550 main.go:141] libmachine: Parsing certificate...
	I0919 23:36:38.650813   75550 main.go:141] libmachine: Running pre-create checks...
	I0919 23:36:38.650835   75550 main.go:141] libmachine: (bridge-024908) Calling .PreCreateCheck
	I0919 23:36:38.651375   75550 main.go:141] libmachine: (bridge-024908) Calling .GetConfigRaw
	I0919 23:36:38.651916   75550 main.go:141] libmachine: Creating machine...
	I0919 23:36:38.651933   75550 main.go:141] libmachine: (bridge-024908) Calling .Create
	I0919 23:36:38.652114   75550 main.go:141] libmachine: (bridge-024908) creating domain...
	I0919 23:36:38.652135   75550 main.go:141] libmachine: (bridge-024908) creating network...
	I0919 23:36:38.653894   75550 main.go:141] libmachine: (bridge-024908) DBG | found existing default network
	I0919 23:36:38.654138   75550 main.go:141] libmachine: (bridge-024908) DBG | <network connections='3'>
	I0919 23:36:38.654163   75550 main.go:141] libmachine: (bridge-024908) DBG |   <name>default</name>
	I0919 23:36:38.654269   75550 main.go:141] libmachine: (bridge-024908) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I0919 23:36:38.654292   75550 main.go:141] libmachine: (bridge-024908) DBG |   <forward mode='nat'>
	I0919 23:36:38.654302   75550 main.go:141] libmachine: (bridge-024908) DBG |     <nat>
	I0919 23:36:38.654311   75550 main.go:141] libmachine: (bridge-024908) DBG |       <port start='1024' end='65535'/>
	I0919 23:36:38.654322   75550 main.go:141] libmachine: (bridge-024908) DBG |     </nat>
	I0919 23:36:38.654331   75550 main.go:141] libmachine: (bridge-024908) DBG |   </forward>
	I0919 23:36:38.654341   75550 main.go:141] libmachine: (bridge-024908) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I0919 23:36:38.654350   75550 main.go:141] libmachine: (bridge-024908) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I0919 23:36:38.654360   75550 main.go:141] libmachine: (bridge-024908) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I0919 23:36:38.654369   75550 main.go:141] libmachine: (bridge-024908) DBG |     <dhcp>
	I0919 23:36:38.654379   75550 main.go:141] libmachine: (bridge-024908) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I0919 23:36:38.654390   75550 main.go:141] libmachine: (bridge-024908) DBG |     </dhcp>
	I0919 23:36:38.654400   75550 main.go:141] libmachine: (bridge-024908) DBG |   </ip>
	I0919 23:36:38.654407   75550 main.go:141] libmachine: (bridge-024908) DBG | </network>
	I0919 23:36:38.654419   75550 main.go:141] libmachine: (bridge-024908) DBG | 
	I0919 23:36:38.655330   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:38.655113   75578 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:83:fa:c4} reservation:<nil>}
	I0919 23:36:38.656238   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:38.656153   75578 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:94:5d:37} reservation:<nil>}
	I0919 23:36:38.657335   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:38.657233   75578 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000117a00}
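
	The three network.go entries above show the free-subnet scan: 192.168.39.0/24 and 192.168.50.0/24 are already bound to virbr1/virbr2, so the next /24 in the family, 192.168.61.0/24, is chosen. Below is a minimal Go sketch of the same idea; the candidate octets and the interface-address check are illustrative assumptions, not minikube's actual implementation.

	// subnet_scan.go: a sketch of free-/24 selection, loosely modeled on the
	// network.go lines above. A subnet counts as "taken" when its gateway
	// address is already assigned to a local interface.
	package main

	import (
		"fmt"
		"net"
	)

	// gatewayInUse reports whether ip (e.g. 192.168.39.1) is bound to a
	// local interface, which is how a claimed libvirt subnet shows up.
	func gatewayInUse(ip net.IP) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return false
		}
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && ipn.IP.Equal(ip) {
				return true
			}
		}
		return false
	}

	func main() {
		// Probe the same private family seen in the log: 192.168.x.0/24.
		for _, octet := range []int{39, 50, 61, 72} {
			gw := net.IPv4(192, 168, byte(octet), 1)
			if gatewayInUse(gw) {
				fmt.Printf("skipping subnet 192.168.%d.0/24 (gateway %s taken)\n", octet, gw)
				continue
			}
			fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
			return
		}
	}
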
	I0919 23:36:38.657375   75550 main.go:141] libmachine: (bridge-024908) DBG | defining private network:
	I0919 23:36:38.657385   75550 main.go:141] libmachine: (bridge-024908) DBG | 
	I0919 23:36:38.657395   75550 main.go:141] libmachine: (bridge-024908) DBG | <network>
	I0919 23:36:38.657403   75550 main.go:141] libmachine: (bridge-024908) DBG |   <name>mk-bridge-024908</name>
	I0919 23:36:38.657415   75550 main.go:141] libmachine: (bridge-024908) DBG |   <dns enable='no'/>
	I0919 23:36:38.657425   75550 main.go:141] libmachine: (bridge-024908) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0919 23:36:38.657450   75550 main.go:141] libmachine: (bridge-024908) DBG |     <dhcp>
	I0919 23:36:38.657469   75550 main.go:141] libmachine: (bridge-024908) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0919 23:36:38.657482   75550 main.go:141] libmachine: (bridge-024908) DBG |     </dhcp>
	I0919 23:36:38.657492   75550 main.go:141] libmachine: (bridge-024908) DBG |   </ip>
	I0919 23:36:38.657499   75550 main.go:141] libmachine: (bridge-024908) DBG | </network>
	I0919 23:36:38.657504   75550 main.go:141] libmachine: (bridge-024908) DBG | 
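
	The <network> document above is handed to libvirt verbatim. A sketch of that define-and-start step, assuming the github.com/libvirt/libvirt-go bindings; the connection URI and error handling are illustrative, not minikube's exact code.

	// define_network.go: a sketch of creating the private network shown
	// above. The XML literal is taken from the log.
	package main

	import (
		"log"

		libvirt "github.com/libvirt/libvirt-go"
	)

	const networkXML = `<network>
	  <name>mk-bridge-024908</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		// Define the persistent network object, then bring it up
		// (it appears as virbr3 in the log).
		net, err := conn.NetworkDefineXML(networkXML)
		if err != nil {
			log.Fatalf("define network: %v", err)
		}
		defer net.Free()
		if err := net.Create(); err != nil {
			log.Fatalf("start network: %v", err)
		}
		log.Println("private network mk-bridge-024908 192.168.61.0/24 created")
	}
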
	I0919 23:36:38.664316   75550 main.go:141] libmachine: (bridge-024908) DBG | creating private network mk-bridge-024908 192.168.61.0/24...
	I0919 23:36:38.762441   75550 main.go:141] libmachine: (bridge-024908) DBG | private network mk-bridge-024908 192.168.61.0/24 created
	I0919 23:36:38.762844   75550 main.go:141] libmachine: (bridge-024908) DBG | <network>
	I0919 23:36:38.762865   75550 main.go:141] libmachine: (bridge-024908) DBG |   <name>mk-bridge-024908</name>
	I0919 23:36:38.762962   75550 main.go:141] libmachine: (bridge-024908) DBG |   <uuid>95a13fb5-b512-4326-8477-f3c0bf269579</uuid>
	I0919 23:36:38.762979   75550 main.go:141] libmachine: (bridge-024908) DBG |   <bridge name='virbr3' stp='on' delay='0'/>
	I0919 23:36:38.762993   75550 main.go:141] libmachine: (bridge-024908) DBG |   <mac address='52:54:00:15:e2:8d'/>
	I0919 23:36:38.763037   75550 main.go:141] libmachine: (bridge-024908) DBG |   <dns enable='no'/>
	I0919 23:36:38.763070   75550 main.go:141] libmachine: (bridge-024908) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0919 23:36:38.763082   75550 main.go:141] libmachine: (bridge-024908) DBG |     <dhcp>
	I0919 23:36:38.763092   75550 main.go:141] libmachine: (bridge-024908) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0919 23:36:38.763106   75550 main.go:141] libmachine: (bridge-024908) DBG |     </dhcp>
	I0919 23:36:38.763118   75550 main.go:141] libmachine: (bridge-024908) DBG |   </ip>
	I0919 23:36:38.763127   75550 main.go:141] libmachine: (bridge-024908) DBG | </network>
	I0919 23:36:38.763138   75550 main.go:141] libmachine: (bridge-024908) DBG | 
	I0919 23:36:38.762876   75550 main.go:141] libmachine: (bridge-024908) setting up store path in /home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908 ...
	I0919 23:36:38.762919   75550 main.go:141] libmachine: (bridge-024908) building disk image from file:///home/jenkins/minikube-integration/21594-14764/.minikube/cache/iso/amd64/minikube-v1.37.0-amd64.iso
	I0919 23:36:38.763017   75550 main.go:141] libmachine: (bridge-024908) Downloading /home/jenkins/minikube-integration/21594-14764/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21594-14764/.minikube/cache/iso/amd64/minikube-v1.37.0-amd64.iso...
	I0919 23:36:38.763150   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:38.762795   75578 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 23:36:39.053562   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:39.053398   75578 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa...
	I0919 23:36:39.390093   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:39.389883   75578 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/bridge-024908.rawdisk...
	I0919 23:36:39.390129   75550 main.go:141] libmachine: (bridge-024908) DBG | Writing magic tar header
	I0919 23:36:39.390176   75550 main.go:141] libmachine: (bridge-024908) DBG | Writing SSH key tar header
	I0919 23:36:39.390207   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:39.390069   75578 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908 ...
	I0919 23:36:39.390228   75550 main.go:141] libmachine: (bridge-024908) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908
	I0919 23:36:39.390244   75550 main.go:141] libmachine: (bridge-024908) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21594-14764/.minikube/machines
	I0919 23:36:39.390259   75550 main.go:141] libmachine: (bridge-024908) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 23:36:39.390311   75550 main.go:141] libmachine: (bridge-024908) setting executable bit set on /home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908 (perms=drwx------)
	I0919 23:36:39.390327   75550 main.go:141] libmachine: (bridge-024908) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21594-14764
	I0919 23:36:39.390337   75550 main.go:141] libmachine: (bridge-024908) setting executable bit set on /home/jenkins/minikube-integration/21594-14764/.minikube/machines (perms=drwxr-xr-x)
	I0919 23:36:39.390347   75550 main.go:141] libmachine: (bridge-024908) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0919 23:36:39.390357   75550 main.go:141] libmachine: (bridge-024908) DBG | checking permissions on dir: /home/jenkins
	I0919 23:36:39.390370   75550 main.go:141] libmachine: (bridge-024908) DBG | checking permissions on dir: /home
	I0919 23:36:39.390381   75550 main.go:141] libmachine: (bridge-024908) DBG | skipping /home - not owner
	I0919 23:36:39.390399   75550 main.go:141] libmachine: (bridge-024908) setting executable bit set on /home/jenkins/minikube-integration/21594-14764/.minikube (perms=drwxr-xr-x)
	I0919 23:36:39.390414   75550 main.go:141] libmachine: (bridge-024908) setting executable bit set on /home/jenkins/minikube-integration/21594-14764 (perms=drwxrwxr-x)
	I0919 23:36:39.390429   75550 main.go:141] libmachine: (bridge-024908) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0919 23:36:39.390437   75550 main.go:141] libmachine: (bridge-024908) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
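
	The permission pass above walks from the machine directory up toward /home, adding the search (executable) bit on every directory the CI user owns and stopping at the first one it does not ("skipping /home - not owner"). A rough, Linux-only sketch of that walk; the store path is taken from the log, while the exact modes minikube sets differ per directory.

	// fix_perms.go: a sketch of the permission walk above. Ensures every
	// owned ancestor of the machine dir carries the search bit.
	package main

	import (
		"log"
		"os"
		"path/filepath"
		"syscall"
	)

	func main() {
		dir := "/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908"
		uid := os.Getuid()
		for dir != "/" {
			info, err := os.Stat(dir)
			if err != nil {
				log.Fatalf("stat %s: %v", dir, err)
			}
			// Stop at directories we don't own (the "skipping /home - not owner" case).
			if st, ok := info.Sys().(*syscall.Stat_t); ok && int(st.Uid) != uid {
				log.Printf("skipping %s - not owner", dir)
				break
			}
			if err := os.Chmod(dir, info.Mode()|0o111); err != nil {
				log.Fatalf("chmod %s: %v", dir, err)
			}
			dir = filepath.Dir(dir)
		}
	}
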
	I0919 23:36:39.390480   75550 main.go:141] libmachine: (bridge-024908) defining domain...
	I0919 23:36:39.392184   75550 main.go:141] libmachine: (bridge-024908) defining domain using XML: 
	I0919 23:36:39.392209   75550 main.go:141] libmachine: (bridge-024908) <domain type='kvm'>
	I0919 23:36:39.392220   75550 main.go:141] libmachine: (bridge-024908)   <name>bridge-024908</name>
	I0919 23:36:39.392227   75550 main.go:141] libmachine: (bridge-024908)   <memory unit='MiB'>3072</memory>
	I0919 23:36:39.392235   75550 main.go:141] libmachine: (bridge-024908)   <vcpu>2</vcpu>
	I0919 23:36:39.392242   75550 main.go:141] libmachine: (bridge-024908)   <features>
	I0919 23:36:39.392251   75550 main.go:141] libmachine: (bridge-024908)     <acpi/>
	I0919 23:36:39.392264   75550 main.go:141] libmachine: (bridge-024908)     <apic/>
	I0919 23:36:39.392287   75550 main.go:141] libmachine: (bridge-024908)     <pae/>
	I0919 23:36:39.392295   75550 main.go:141] libmachine: (bridge-024908)   </features>
	I0919 23:36:39.392311   75550 main.go:141] libmachine: (bridge-024908)   <cpu mode='host-passthrough'>
	I0919 23:36:39.392317   75550 main.go:141] libmachine: (bridge-024908)   </cpu>
	I0919 23:36:39.392325   75550 main.go:141] libmachine: (bridge-024908)   <os>
	I0919 23:36:39.392331   75550 main.go:141] libmachine: (bridge-024908)     <type>hvm</type>
	I0919 23:36:39.392338   75550 main.go:141] libmachine: (bridge-024908)     <boot dev='cdrom'/>
	I0919 23:36:39.392344   75550 main.go:141] libmachine: (bridge-024908)     <boot dev='hd'/>
	I0919 23:36:39.392352   75550 main.go:141] libmachine: (bridge-024908)     <bootmenu enable='no'/>
	I0919 23:36:39.392358   75550 main.go:141] libmachine: (bridge-024908)   </os>
	I0919 23:36:39.392366   75550 main.go:141] libmachine: (bridge-024908)   <devices>
	I0919 23:36:39.392373   75550 main.go:141] libmachine: (bridge-024908)     <disk type='file' device='cdrom'>
	I0919 23:36:39.392386   75550 main.go:141] libmachine: (bridge-024908)       <source file='/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/boot2docker.iso'/>
	I0919 23:36:39.392393   75550 main.go:141] libmachine: (bridge-024908)       <target dev='hdc' bus='scsi'/>
	I0919 23:36:39.392401   75550 main.go:141] libmachine: (bridge-024908)       <readonly/>
	I0919 23:36:39.392421   75550 main.go:141] libmachine: (bridge-024908)     </disk>
	I0919 23:36:39.392431   75550 main.go:141] libmachine: (bridge-024908)     <disk type='file' device='disk'>
	I0919 23:36:39.392440   75550 main.go:141] libmachine: (bridge-024908)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0919 23:36:39.392453   75550 main.go:141] libmachine: (bridge-024908)       <source file='/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/bridge-024908.rawdisk'/>
	I0919 23:36:39.392460   75550 main.go:141] libmachine: (bridge-024908)       <target dev='hda' bus='virtio'/>
	I0919 23:36:39.392468   75550 main.go:141] libmachine: (bridge-024908)     </disk>
	I0919 23:36:39.392474   75550 main.go:141] libmachine: (bridge-024908)     <interface type='network'>
	I0919 23:36:39.392484   75550 main.go:141] libmachine: (bridge-024908)       <source network='mk-bridge-024908'/>
	I0919 23:36:39.392490   75550 main.go:141] libmachine: (bridge-024908)       <model type='virtio'/>
	I0919 23:36:39.392498   75550 main.go:141] libmachine: (bridge-024908)     </interface>
	I0919 23:36:39.392504   75550 main.go:141] libmachine: (bridge-024908)     <interface type='network'>
	I0919 23:36:39.392513   75550 main.go:141] libmachine: (bridge-024908)       <source network='default'/>
	I0919 23:36:39.392519   75550 main.go:141] libmachine: (bridge-024908)       <model type='virtio'/>
	I0919 23:36:39.392527   75550 main.go:141] libmachine: (bridge-024908)     </interface>
	I0919 23:36:39.392533   75550 main.go:141] libmachine: (bridge-024908)     <serial type='pty'>
	I0919 23:36:39.392541   75550 main.go:141] libmachine: (bridge-024908)       <target port='0'/>
	I0919 23:36:39.392547   75550 main.go:141] libmachine: (bridge-024908)     </serial>
	I0919 23:36:39.392555   75550 main.go:141] libmachine: (bridge-024908)     <console type='pty'>
	I0919 23:36:39.392572   75550 main.go:141] libmachine: (bridge-024908)       <target type='serial' port='0'/>
	I0919 23:36:39.392580   75550 main.go:141] libmachine: (bridge-024908)     </console>
	I0919 23:36:39.392586   75550 main.go:141] libmachine: (bridge-024908)     <rng model='virtio'>
	I0919 23:36:39.392596   75550 main.go:141] libmachine: (bridge-024908)       <backend model='random'>/dev/random</backend>
	I0919 23:36:39.392603   75550 main.go:141] libmachine: (bridge-024908)     </rng>
	I0919 23:36:39.392611   75550 main.go:141] libmachine: (bridge-024908)   </devices>
	I0919 23:36:39.392623   75550 main.go:141] libmachine: (bridge-024908) </domain>
	I0919 23:36:39.392632   75550 main.go:141] libmachine: (bridge-024908) 
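
	With the XML assembled, the driver defines a persistent domain and boots it. A sketch of those two calls, again assuming the github.com/libvirt/libvirt-go bindings; domainXML below is a trimmed stand-in for the full document printed above.

	// define_domain.go: a sketch of the define/start step ("defining
	// domain..." followed by "starting domain...") in the log.
	package main

	import (
		"log"

		libvirt "github.com/libvirt/libvirt-go"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		// Trimmed stand-in for the <domain> XML shown above; libvirt
		// fills in defaults for the omitted devices.
		domainXML := `<domain type='kvm'>
		  <name>bridge-024908</name>
		  <memory unit='MiB'>3072</memory>
		  <vcpu>2</vcpu>
		  <os><type>hvm</type><boot dev='hd'/></os>
		</domain>`

		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			log.Fatalf("define domain: %v", err)
		}
		defer dom.Free()
		if err := dom.Create(); err != nil {
			log.Fatalf("start domain: %v", err)
		}
		log.Println("domain is now running")
	}
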
	I0919 23:36:39.398997   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:c9:06:0c in network default
	I0919 23:36:39.399812   75550 main.go:141] libmachine: (bridge-024908) starting domain...
	I0919 23:36:39.399829   75550 main.go:141] libmachine: (bridge-024908) ensuring networks are active...
	I0919 23:36:39.399847   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:39.400874   75550 main.go:141] libmachine: (bridge-024908) Ensuring network default is active
	I0919 23:36:39.401267   75550 main.go:141] libmachine: (bridge-024908) Ensuring network mk-bridge-024908 is active
	I0919 23:36:39.402187   75550 main.go:141] libmachine: (bridge-024908) getting domain XML...
	I0919 23:36:39.403617   75550 main.go:141] libmachine: (bridge-024908) DBG | starting domain XML:
	I0919 23:36:39.403635   75550 main.go:141] libmachine: (bridge-024908) DBG | <domain type='kvm'>
	I0919 23:36:39.403644   75550 main.go:141] libmachine: (bridge-024908) DBG |   <name>bridge-024908</name>
	I0919 23:36:39.403654   75550 main.go:141] libmachine: (bridge-024908) DBG |   <uuid>edde9a5b-670d-4ac4-972d-a0f3dbabce20</uuid>
	I0919 23:36:39.403665   75550 main.go:141] libmachine: (bridge-024908) DBG |   <memory unit='KiB'>3145728</memory>
	I0919 23:36:39.403675   75550 main.go:141] libmachine: (bridge-024908) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I0919 23:36:39.403692   75550 main.go:141] libmachine: (bridge-024908) DBG |   <vcpu placement='static'>2</vcpu>
	I0919 23:36:39.403698   75550 main.go:141] libmachine: (bridge-024908) DBG |   <os>
	I0919 23:36:39.403708   75550 main.go:141] libmachine: (bridge-024908) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I0919 23:36:39.403714   75550 main.go:141] libmachine: (bridge-024908) DBG |     <boot dev='cdrom'/>
	I0919 23:36:39.403823   75550 main.go:141] libmachine: (bridge-024908) DBG |     <boot dev='hd'/>
	I0919 23:36:39.403871   75550 main.go:141] libmachine: (bridge-024908) DBG |     <bootmenu enable='no'/>
	I0919 23:36:39.403885   75550 main.go:141] libmachine: (bridge-024908) DBG |   </os>
	I0919 23:36:39.403892   75550 main.go:141] libmachine: (bridge-024908) DBG |   <features>
	I0919 23:36:39.403901   75550 main.go:141] libmachine: (bridge-024908) DBG |     <acpi/>
	I0919 23:36:39.403907   75550 main.go:141] libmachine: (bridge-024908) DBG |     <apic/>
	I0919 23:36:39.403915   75550 main.go:141] libmachine: (bridge-024908) DBG |     <pae/>
	I0919 23:36:39.403936   75550 main.go:141] libmachine: (bridge-024908) DBG |   </features>
	I0919 23:36:39.403962   75550 main.go:141] libmachine: (bridge-024908) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I0919 23:36:39.403976   75550 main.go:141] libmachine: (bridge-024908) DBG |   <clock offset='utc'/>
	I0919 23:36:39.403990   75550 main.go:141] libmachine: (bridge-024908) DBG |   <on_poweroff>destroy</on_poweroff>
	I0919 23:36:39.403998   75550 main.go:141] libmachine: (bridge-024908) DBG |   <on_reboot>restart</on_reboot>
	I0919 23:36:39.404012   75550 main.go:141] libmachine: (bridge-024908) DBG |   <on_crash>destroy</on_crash>
	I0919 23:36:39.404020   75550 main.go:141] libmachine: (bridge-024908) DBG |   <devices>
	I0919 23:36:39.404030   75550 main.go:141] libmachine: (bridge-024908) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I0919 23:36:39.404061   75550 main.go:141] libmachine: (bridge-024908) DBG |     <disk type='file' device='cdrom'>
	I0919 23:36:39.404107   75550 main.go:141] libmachine: (bridge-024908) DBG |       <driver name='qemu' type='raw'/>
	I0919 23:36:39.404141   75550 main.go:141] libmachine: (bridge-024908) DBG |       <source file='/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/boot2docker.iso'/>
	I0919 23:36:39.404155   75550 main.go:141] libmachine: (bridge-024908) DBG |       <target dev='hdc' bus='scsi'/>
	I0919 23:36:39.404162   75550 main.go:141] libmachine: (bridge-024908) DBG |       <readonly/>
	I0919 23:36:39.404188   75550 main.go:141] libmachine: (bridge-024908) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I0919 23:36:39.404198   75550 main.go:141] libmachine: (bridge-024908) DBG |     </disk>
	I0919 23:36:39.404216   75550 main.go:141] libmachine: (bridge-024908) DBG |     <disk type='file' device='disk'>
	I0919 23:36:39.404224   75550 main.go:141] libmachine: (bridge-024908) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I0919 23:36:39.404238   75550 main.go:141] libmachine: (bridge-024908) DBG |       <source file='/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/bridge-024908.rawdisk'/>
	I0919 23:36:39.404246   75550 main.go:141] libmachine: (bridge-024908) DBG |       <target dev='hda' bus='virtio'/>
	I0919 23:36:39.404257   75550 main.go:141] libmachine: (bridge-024908) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I0919 23:36:39.404263   75550 main.go:141] libmachine: (bridge-024908) DBG |     </disk>
	I0919 23:36:39.404273   75550 main.go:141] libmachine: (bridge-024908) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I0919 23:36:39.404283   75550 main.go:141] libmachine: (bridge-024908) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I0919 23:36:39.404292   75550 main.go:141] libmachine: (bridge-024908) DBG |     </controller>
	I0919 23:36:39.404301   75550 main.go:141] libmachine: (bridge-024908) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I0919 23:36:39.404310   75550 main.go:141] libmachine: (bridge-024908) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I0919 23:36:39.404320   75550 main.go:141] libmachine: (bridge-024908) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I0919 23:36:39.404328   75550 main.go:141] libmachine: (bridge-024908) DBG |     </controller>
	I0919 23:36:39.404336   75550 main.go:141] libmachine: (bridge-024908) DBG |     <interface type='network'>
	I0919 23:36:39.404344   75550 main.go:141] libmachine: (bridge-024908) DBG |       <mac address='52:54:00:6c:4a:f7'/>
	I0919 23:36:39.404352   75550 main.go:141] libmachine: (bridge-024908) DBG |       <source network='mk-bridge-024908'/>
	I0919 23:36:39.404369   75550 main.go:141] libmachine: (bridge-024908) DBG |       <model type='virtio'/>
	I0919 23:36:39.404379   75550 main.go:141] libmachine: (bridge-024908) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I0919 23:36:39.404387   75550 main.go:141] libmachine: (bridge-024908) DBG |     </interface>
	I0919 23:36:39.404394   75550 main.go:141] libmachine: (bridge-024908) DBG |     <interface type='network'>
	I0919 23:36:39.404403   75550 main.go:141] libmachine: (bridge-024908) DBG |       <mac address='52:54:00:c9:06:0c'/>
	I0919 23:36:39.404410   75550 main.go:141] libmachine: (bridge-024908) DBG |       <source network='default'/>
	I0919 23:36:39.404418   75550 main.go:141] libmachine: (bridge-024908) DBG |       <model type='virtio'/>
	I0919 23:36:39.404428   75550 main.go:141] libmachine: (bridge-024908) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I0919 23:36:39.404435   75550 main.go:141] libmachine: (bridge-024908) DBG |     </interface>
	I0919 23:36:39.404445   75550 main.go:141] libmachine: (bridge-024908) DBG |     <serial type='pty'>
	I0919 23:36:39.404454   75550 main.go:141] libmachine: (bridge-024908) DBG |       <target type='isa-serial' port='0'>
	I0919 23:36:39.404461   75550 main.go:141] libmachine: (bridge-024908) DBG |         <model name='isa-serial'/>
	I0919 23:36:39.404469   75550 main.go:141] libmachine: (bridge-024908) DBG |       </target>
	I0919 23:36:39.404475   75550 main.go:141] libmachine: (bridge-024908) DBG |     </serial>
	I0919 23:36:39.404483   75550 main.go:141] libmachine: (bridge-024908) DBG |     <console type='pty'>
	I0919 23:36:39.404490   75550 main.go:141] libmachine: (bridge-024908) DBG |       <target type='serial' port='0'/>
	I0919 23:36:39.404498   75550 main.go:141] libmachine: (bridge-024908) DBG |     </console>
	I0919 23:36:39.404506   75550 main.go:141] libmachine: (bridge-024908) DBG |     <input type='mouse' bus='ps2'/>
	I0919 23:36:39.404515   75550 main.go:141] libmachine: (bridge-024908) DBG |     <input type='keyboard' bus='ps2'/>
	I0919 23:36:39.404522   75550 main.go:141] libmachine: (bridge-024908) DBG |     <audio id='1' type='none'/>
	I0919 23:36:39.404530   75550 main.go:141] libmachine: (bridge-024908) DBG |     <memballoon model='virtio'>
	I0919 23:36:39.404539   75550 main.go:141] libmachine: (bridge-024908) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I0919 23:36:39.404547   75550 main.go:141] libmachine: (bridge-024908) DBG |     </memballoon>
	I0919 23:36:39.404553   75550 main.go:141] libmachine: (bridge-024908) DBG |     <rng model='virtio'>
	I0919 23:36:39.404562   75550 main.go:141] libmachine: (bridge-024908) DBG |       <backend model='random'>/dev/random</backend>
	I0919 23:36:39.404571   75550 main.go:141] libmachine: (bridge-024908) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I0919 23:36:39.404582   75550 main.go:141] libmachine: (bridge-024908) DBG |     </rng>
	I0919 23:36:39.404588   75550 main.go:141] libmachine: (bridge-024908) DBG |   </devices>
	I0919 23:36:39.404596   75550 main.go:141] libmachine: (bridge-024908) DBG | </domain>
	I0919 23:36:39.404602   75550 main.go:141] libmachine: (bridge-024908) DBG | 
	I0919 23:36:41.141041   75550 main.go:141] libmachine: (bridge-024908) waiting for domain to start...
	I0919 23:36:41.142627   75550 main.go:141] libmachine: (bridge-024908) domain is now running
	I0919 23:36:41.142650   75550 main.go:141] libmachine: (bridge-024908) waiting for IP...
	I0919 23:36:41.143576   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:41.144321   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:41.144368   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:41.147405   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:41.147508   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:41.147406   75578 retry.go:31] will retry after 302.394896ms: waiting for domain to come up
	I0919 23:36:41.452391   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:41.453227   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:41.453267   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:41.453782   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:41.453813   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:41.453668   75578 retry.go:31] will retry after 284.249946ms: waiting for domain to come up
	I0919 23:36:41.740563   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:41.741701   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:41.741977   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:41.742448   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:41.742684   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:41.742535   75578 retry.go:31] will retry after 320.73485ms: waiting for domain to come up
	I0919 23:36:42.065165   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:42.066132   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:42.066156   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:42.066643   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:42.066683   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:42.066620   75578 retry.go:31] will retry after 403.91255ms: waiting for domain to come up
	I0919 23:36:42.472445   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:42.473224   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:42.473274   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:42.473707   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:42.473759   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:42.473659   75578 retry.go:31] will retry after 562.979109ms: waiting for domain to come up
	I0919 23:36:43.038837   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:43.039974   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:43.040083   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:43.040529   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:43.040566   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:43.040524   75578 retry.go:31] will retry after 888.081744ms: waiting for domain to come up
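
	The repeated "will retry after ..." entries come from a jittered, growing backoff around the lease/ARP probe. A self-contained sketch of that retry shape; the base delay, growth factor, and jitter range are assumptions chosen to resemble the intervals in the log, not retry.go's actual constants.

	// retry_sketch.go: retry a probe with jittered, roughly-growing delays
	// until it succeeds or the deadline passes, like retry.go above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryUntil(timeout time.Duration, probe func() error) error {
		deadline := time.Now().Add(timeout)
		backoff := 300 * time.Millisecond
		for {
			if err := probe(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for domain to come up")
			}
			// Jitter around the current backoff, then grow it, matching
			// the irregular "will retry after ..." intervals in the log.
			wait := time.Duration(float64(backoff) * (0.5 + rand.Float64()))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
			time.Sleep(wait)
			backoff = backoff * 3 / 2
		}
	}

	func main() {
		up := time.Now().Add(2 * time.Second) // pretend the VM gets an IP after 2s
		if err := retryUntil(time.Minute, func() error {
			if time.Now().Before(up) {
				return errors.New("no IP yet")
			}
			return nil
		}); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("domain is up")
	}
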
	I0919 23:36:40.122373   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:40.122419   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:40.122428   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:40.122438   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:40.122444   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:40.122453   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:40.122458   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:40.122464   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Running
	I0919 23:36:40.122482   73258 retry.go:31] will retry after 1.467659498s: missing components: kube-dns
	I0919 23:36:41.596933   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:41.596978   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:41.596987   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:41.596996   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:41.597003   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:41.597008   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:41.597014   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:41.597020   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Running
	I0919 23:36:41.597039   73258 retry.go:31] will retry after 1.787145551s: missing components: kube-dns
	I0919 23:36:43.391350   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:43.391395   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:43.391406   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:43.391414   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:43.391420   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:43.391427   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:43.391432   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:43.391437   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Running
	I0919 23:36:43.391457   73258 retry.go:31] will retry after 2.32094539s: missing components: kube-dns
	W0919 23:36:42.610833   73436 pod_ready.go:104] pod "coredns-66bc5c9577-qxgj9" is not "Ready", error: <nil>
	I0919 23:36:43.605311   73436 pod_ready.go:94] pod "coredns-66bc5c9577-qxgj9" is "Ready"
	I0919 23:36:43.605355   73436 pod_ready.go:86] duration metric: took 3.0116459s for pod "coredns-66bc5c9577-qxgj9" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:43.610579   73436 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-304197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:45.119780   73436 pod_ready.go:94] pod "etcd-default-k8s-diff-port-304197" is "Ready"
	I0919 23:36:45.119810   73436 pod_ready.go:86] duration metric: took 1.509196741s for pod "etcd-default-k8s-diff-port-304197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:45.123556   73436 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-304197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:45.135451   73436 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-304197" is "Ready"
	I0919 23:36:45.135483   73436 pod_ready.go:86] duration metric: took 11.885835ms for pod "kube-apiserver-default-k8s-diff-port-304197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:45.141571   73436 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-304197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:45.148179   73436 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-304197" is "Ready"
	I0919 23:36:45.148214   73436 pod_ready.go:86] duration metric: took 6.609569ms for pod "kube-controller-manager-default-k8s-diff-port-304197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:45.199272   73436 pod_ready.go:83] waiting for pod "kube-proxy-hr2bk" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:45.602963   73436 pod_ready.go:94] pod "kube-proxy-hr2bk" is "Ready"
	I0919 23:36:45.602991   73436 pod_ready.go:86] duration metric: took 403.695732ms for pod "kube-proxy-hr2bk" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:45.799122   73436 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-304197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:46.199651   73436 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-304197" is "Ready"
	I0919 23:36:46.199686   73436 pod_ready.go:86] duration metric: took 400.535111ms for pod "kube-scheduler-default-k8s-diff-port-304197" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:46.199701   73436 pod_ready.go:40] duration metric: took 5.620390024s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:36:46.249045   73436 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:36:46.250918   73436 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-304197" cluster and "default" namespace by default
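
	The pod_ready.go wait above polls each labelled kube-system pod until its Ready condition turns True (or the pod disappears). A sketch of the same check with the standard client-go API; the kubeconfig path, selector, poll interval, and timeout are illustrative.

	// pod_ready_sketch.go: poll labelled kube-system pods until Ready.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Wait up to 4m for every kube-dns pod to be Ready, like the log's loop.
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err != nil {
				log.Fatal(err)
			}
			ready := len(pods.Items) > 0
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					ready = false
				}
			}
			if ready {
				fmt.Println("all kube-dns pods are Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("timed out waiting for kube-dns pods")
	}
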
	I0919 23:36:43.929924   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:43.930760   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:43.930790   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:43.931162   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:43.931184   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:43.931148   75578 retry.go:31] will retry after 1.15149481s: waiting for domain to come up
	I0919 23:36:45.084216   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:45.085010   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:45.085040   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:45.085380   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:45.085401   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:45.085359   75578 retry.go:31] will retry after 1.310420989s: waiting for domain to come up
	I0919 23:36:46.399094   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:46.399967   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:46.399985   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:46.400409   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:46.400460   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:46.400397   75578 retry.go:31] will retry after 1.537684727s: waiting for domain to come up
	I0919 23:36:47.939978   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:47.940713   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:47.940746   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:47.941190   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:47.941246   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:47.941179   75578 retry.go:31] will retry after 2.173582548s: waiting for domain to come up
	I0919 23:36:45.718271   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:45.718306   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:45.718315   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:45.718323   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:45.718329   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:45.718334   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:45.718339   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:45.718350   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Running
	I0919 23:36:45.718369   73258 retry.go:31] will retry after 2.363488525s: missing components: kube-dns
	I0919 23:36:48.088981   73258 system_pods.go:86] 7 kube-system pods found
	I0919 23:36:48.089014   73258 system_pods.go:89] "coredns-66bc5c9577-6ff4s" [bfeeb5e4-29d3-4cac-a8d8-f50cea911379] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:36:48.089020   73258 system_pods.go:89] "etcd-flannel-024908" [750b89e8-a3d5-4298-8de2-0f8813e3f04f] Running
	I0919 23:36:48.089033   73258 system_pods.go:89] "kube-apiserver-flannel-024908" [6fd5e3be-9011-4355-ac6e-edce6a72430c] Running
	I0919 23:36:48.089036   73258 system_pods.go:89] "kube-controller-manager-flannel-024908" [991db4b7-f11c-492c-bb87-2291620f98c3] Running
	I0919 23:36:48.089041   73258 system_pods.go:89] "kube-proxy-5ch96" [1e8a306d-3338-4763-a77f-02cdca9beefa] Running
	I0919 23:36:48.089044   73258 system_pods.go:89] "kube-scheduler-flannel-024908" [4bde65b7-9774-486e-b36b-fd4f0f4ad913] Running
	I0919 23:36:48.089047   73258 system_pods.go:89] "storage-provisioner" [8e0b0c57-dc02-42f5-a76e-71f66be07956] Running
	I0919 23:36:48.089056   73258 system_pods.go:126] duration metric: took 12.101602867s to wait for k8s-apps to be running ...
	I0919 23:36:48.089063   73258 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:36:48.089111   73258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:36:48.111527   73258 system_svc.go:56] duration metric: took 22.453524ms WaitForService to wait for kubelet
	I0919 23:36:48.111558   73258 kubeadm.go:578] duration metric: took 19.314850114s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:36:48.111576   73258 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:36:48.114690   73258 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 23:36:48.114739   73258 node_conditions.go:123] node cpu capacity is 2
	I0919 23:36:48.114757   73258 node_conditions.go:105] duration metric: took 3.175386ms to run NodePressure ...
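
	The NodePressure step reads capacity and pressure conditions straight from the Node objects. A short client-go sketch of that verification; again, the kubeconfig path is illustrative.

	// node_pressure_sketch.go: print node capacity and flag any
	// Memory/Disk/PID pressure conditions, as in node_conditions.go above.
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n",
				n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if c.Status == corev1.ConditionTrue {
						fmt.Printf("  pressure condition: %s\n", c.Type)
					}
				}
			}
		}
	}
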
	I0919 23:36:48.114771   73258 start.go:241] waiting for startup goroutines ...
	I0919 23:36:48.114780   73258 start.go:246] waiting for cluster config update ...
	I0919 23:36:48.114795   73258 start.go:255] writing updated cluster config ...
	I0919 23:36:48.158441   73258 ssh_runner.go:195] Run: rm -f paused
	I0919 23:36:48.165144   73258 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:36:48.169842   73258 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6ff4s" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:48.677739   73258 pod_ready.go:94] pod "coredns-66bc5c9577-6ff4s" is "Ready"
	I0919 23:36:48.677769   73258 pod_ready.go:86] duration metric: took 507.901936ms for pod "coredns-66bc5c9577-6ff4s" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:48.680484   73258 pod_ready.go:83] waiting for pod "etcd-flannel-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:48.690907   73258 pod_ready.go:94] pod "etcd-flannel-024908" is "Ready"
	I0919 23:36:48.690940   73258 pod_ready.go:86] duration metric: took 10.417306ms for pod "etcd-flannel-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:48.782360   73258 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:48.791110   73258 pod_ready.go:94] pod "kube-apiserver-flannel-024908" is "Ready"
	I0919 23:36:48.791134   73258 pod_ready.go:86] duration metric: took 8.752412ms for pod "kube-apiserver-flannel-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:48.794616   73258 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:48.970689   73258 pod_ready.go:94] pod "kube-controller-manager-flannel-024908" is "Ready"
	I0919 23:36:48.970713   73258 pod_ready.go:86] duration metric: took 176.076221ms for pod "kube-controller-manager-flannel-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:49.172907   73258 pod_ready.go:83] waiting for pod "kube-proxy-5ch96" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:49.573296   73258 pod_ready.go:94] pod "kube-proxy-5ch96" is "Ready"
	I0919 23:36:49.573339   73258 pod_ready.go:86] duration metric: took 400.39909ms for pod "kube-proxy-5ch96" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:49.770664   73258 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:50.170961   73258 pod_ready.go:94] pod "kube-scheduler-flannel-024908" is "Ready"
	I0919 23:36:50.170994   73258 pod_ready.go:86] duration metric: took 400.305782ms for pod "kube-scheduler-flannel-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:36:50.171008   73258 pod_ready.go:40] duration metric: took 2.005821553s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:36:50.229754   73258 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:36:50.232046   73258 out.go:179] * Done! kubectl is now configured to use "flannel-024908" cluster and "default" namespace by default
	I0919 23:36:50.117153   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:50.118036   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:50.118062   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:50.118448   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:50.118492   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:50.118415   75578 retry.go:31] will retry after 2.881257511s: waiting for domain to come up
	I0919 23:36:53.003017   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:53.003747   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:53.003776   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:53.004219   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:53.004295   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:53.004201   75578 retry.go:31] will retry after 2.53385353s: waiting for domain to come up
	I0919 23:36:55.540218   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:55.540965   75550 main.go:141] libmachine: (bridge-024908) DBG | no network interface addresses found for domain bridge-024908 (source=lease)
	I0919 23:36:55.540991   75550 main.go:141] libmachine: (bridge-024908) DBG | trying to list again with source=arp
	I0919 23:36:55.541332   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find current IP address of domain bridge-024908 in network mk-bridge-024908 (interfaces detected: [])
	I0919 23:36:55.541356   75550 main.go:141] libmachine: (bridge-024908) DBG | I0919 23:36:55.541296   75578 retry.go:31] will retry after 3.231060911s: waiting for domain to come up
	I0919 23:36:58.774245   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:58.775228   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has current primary IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:58.775256   75550 main.go:141] libmachine: (bridge-024908) found domain IP: 192.168.61.181
	I0919 23:36:58.775268   75550 main.go:141] libmachine: (bridge-024908) reserving static IP address...
	I0919 23:36:58.775750   75550 main.go:141] libmachine: (bridge-024908) DBG | unable to find host DHCP lease matching {name: "bridge-024908", mac: "52:54:00:6c:4a:f7", ip: "192.168.61.181"} in network mk-bridge-024908
	I0919 23:36:59.018702   75550 main.go:141] libmachine: (bridge-024908) reserved static IP address 192.168.61.181 for domain bridge-024908
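
	Reserving the static IP pins the DHCP lease to the domain's MAC by inserting a <host> entry into the network's dhcp section. A sketch using Network.Update from the github.com/libvirt/libvirt-go bindings; treat the command, section, and flag constants as assumptions to verify against the binding version in use.

	// reserve_ip_sketch.go: add a DHCP host reservation so the lease stays
	// pinned to the MAC, as in "reserving static IP address..." above.
	package main

	import (
		"log"

		libvirt "github.com/libvirt/libvirt-go"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		net, err := conn.LookupNetworkByName("mk-bridge-024908")
		if err != nil {
			log.Fatal(err)
		}
		defer net.Free()

		host := `<host mac='52:54:00:6c:4a:f7' name='bridge-024908' ip='192.168.61.181'/>`
		// Apply to both the live network and its persistent config.
		err = net.Update(libvirt.NETWORK_UPDATE_COMMAND_ADD_LAST,
			libvirt.NETWORK_SECTION_IP_DHCP_HOST, -1, host,
			libvirt.NETWORK_UPDATE_AFFECT_LIVE|libvirt.NETWORK_UPDATE_AFFECT_CONFIG)
		if err != nil {
			log.Fatal(err)
		}
		log.Println("reserved static IP 192.168.61.181 for domain bridge-024908")
	}
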
	I0919 23:36:59.018768   75550 main.go:141] libmachine: (bridge-024908) waiting for SSH...
	I0919 23:36:59.018785   75550 main.go:141] libmachine: (bridge-024908) DBG | Getting to WaitForSSH function...
	I0919 23:36:59.023380   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.023934   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:36:59.023964   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.024206   75550 main.go:141] libmachine: (bridge-024908) DBG | Using SSH client type: external
	I0919 23:36:59.024232   75550 main.go:141] libmachine: (bridge-024908) DBG | Using SSH private key: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa (-rw-------)
	I0919 23:36:59.024271   75550 main.go:141] libmachine: (bridge-024908) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0919 23:36:59.024282   75550 main.go:141] libmachine: (bridge-024908) DBG | About to run SSH command:
	I0919 23:36:59.024295   75550 main.go:141] libmachine: (bridge-024908) DBG | exit 0
	I0919 23:36:59.165433   75550 main.go:141] libmachine: (bridge-024908) DBG | SSH cmd err, output: <nil>: 
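The WaitForSSH step above shells out to the system ssh binary and treats a successful `exit 0` as proof that the guest is reachable. Below is a minimal Go sketch of that probe using the same external-client options the log shows; the retry count and interval are illustrative, not minikube's actual values:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// probeSSH runs `exit 0` on the guest via the external ssh binary,
	// retrying until the command succeeds or attempts are exhausted.
	func probeSSH(addr, keyPath string) error {
		for i := 0; i < 30; i++ {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				"docker@"+addr, "exit 0")
			if err := cmd.Run(); err == nil {
				return nil // guest answered; SSH is up
			}
			time.Sleep(2 * time.Second) // illustrative backoff
		}
		return fmt.Errorf("ssh to %s never came up", addr)
	}

	func main() {
		if err := probeSSH("192.168.61.181", "/path/to/id_rsa"); err != nil {
			fmt.Println(err)
		}
	}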
	I0919 23:36:59.165781   75550 main.go:141] libmachine: (bridge-024908) domain creation complete
	I0919 23:36:59.166275   75550 main.go:141] libmachine: (bridge-024908) Calling .GetConfigRaw
	I0919 23:36:59.167044   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:36:59.167296   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:36:59.167493   75550 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0919 23:36:59.167510   75550 main.go:141] libmachine: (bridge-024908) Calling .GetState
	I0919 23:36:59.170476   75550 main.go:141] libmachine: Detecting operating system of created instance...
	I0919 23:36:59.170500   75550 main.go:141] libmachine: Waiting for SSH to be available...
	I0919 23:36:59.170508   75550 main.go:141] libmachine: Getting to WaitForSSH function...
	I0919 23:36:59.170517   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:36:59.173871   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.174509   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:36:59.174616   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.174993   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:36:59.175250   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.175448   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.175656   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:36:59.175870   75550 main.go:141] libmachine: Using SSH client type: native
	I0919 23:36:59.176170   75550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0919 23:36:59.176180   75550 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0919 23:36:59.299849   75550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:36:59.299880   75550 main.go:141] libmachine: Detecting the provisioner...
	I0919 23:36:59.299891   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:36:59.303824   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.304279   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:36:59.304319   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.304599   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:36:59.304848   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.305034   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.305194   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:36:59.305431   75550 main.go:141] libmachine: Using SSH client type: native
	I0919 23:36:59.305642   75550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0919 23:36:59.305662   75550 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0919 23:36:59.430465   75550 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I0919 23:36:59.430538   75550 main.go:141] libmachine: found compatible host: buildroot
	I0919 23:36:59.430548   75550 main.go:141] libmachine: Provisioning with buildroot...
	I0919 23:36:59.430558   75550 main.go:141] libmachine: (bridge-024908) Calling .GetMachineName
	I0919 23:36:59.430861   75550 buildroot.go:166] provisioning hostname "bridge-024908"
	I0919 23:36:59.430887   75550 main.go:141] libmachine: (bridge-024908) Calling .GetMachineName
	I0919 23:36:59.431096   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:36:59.434754   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.435320   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:36:59.435359   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.435582   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:36:59.435777   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.435970   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.436113   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:36:59.436259   75550 main.go:141] libmachine: Using SSH client type: native
	I0919 23:36:59.436541   75550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0919 23:36:59.436561   75550 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-024908 && echo "bridge-024908" | sudo tee /etc/hostname
	I0919 23:36:59.578445   75550 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-024908
	
	I0919 23:36:59.578478   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:36:59.582245   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.582752   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:36:59.582781   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.583050   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:36:59.583226   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.583431   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.583588   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:36:59.583813   75550 main.go:141] libmachine: Using SSH client type: native
	I0919 23:36:59.584091   75550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0919 23:36:59.584117   75550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-024908' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-024908/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-024908' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:36:59.725722   75550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:36:59.725771   75550 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14764/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14764/.minikube}
	I0919 23:36:59.725795   75550 buildroot.go:174] setting up certificates
	I0919 23:36:59.725806   75550 provision.go:84] configureAuth start
	I0919 23:36:59.725818   75550 main.go:141] libmachine: (bridge-024908) Calling .GetMachineName
	I0919 23:36:59.726137   75550 main.go:141] libmachine: (bridge-024908) Calling .GetIP
	I0919 23:36:59.729658   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.730174   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:36:59.730213   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.730440   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:36:59.734183   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.734769   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:36:59.734800   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.735045   75550 provision.go:143] copyHostCerts
	I0919 23:36:59.735114   75550 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14764/.minikube/key.pem, removing ...
	I0919 23:36:59.735128   75550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14764/.minikube/key.pem
	I0919 23:36:59.735221   75550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14764/.minikube/key.pem (1679 bytes)
	I0919 23:36:59.735351   75550 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14764/.minikube/ca.pem, removing ...
	I0919 23:36:59.735362   75550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14764/.minikube/ca.pem
	I0919 23:36:59.735405   75550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14764/.minikube/ca.pem (1082 bytes)
	I0919 23:36:59.735510   75550 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14764/.minikube/cert.pem, removing ...
	I0919 23:36:59.735521   75550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14764/.minikube/cert.pem
	I0919 23:36:59.735558   75550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14764/.minikube/cert.pem (1123 bytes)
	I0919 23:36:59.735633   75550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14764/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca-key.pem org=jenkins.bridge-024908 san=[127.0.0.1 192.168.61.181 bridge-024908 localhost minikube]
	I0919 23:36:59.911374   75550 provision.go:177] copyRemoteCerts
	I0919 23:36:59.911428   75550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:36:59.911451   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:36:59.915638   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.916159   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:36:59.916207   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:36:59.916473   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:36:59.916749   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:36:59.916970   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:36:59.917121   75550 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa Username:docker}
	I0919 23:37:00.016922   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 23:37:00.060261   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 23:37:00.103028   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 23:37:00.162378   75550 provision.go:87] duration metric: took 436.56023ms to configureAuth
	I0919 23:37:00.162410   75550 buildroot.go:189] setting minikube options for container-runtime
	I0919 23:37:00.162585   75550 config.go:182] Loaded profile config "bridge-024908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:37:00.162667   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:37:00.166952   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.167481   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:00.167509   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.167805   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:37:00.167993   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:00.168148   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:00.168305   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:37:00.168814   75550 main.go:141] libmachine: Using SSH client type: native
	I0919 23:37:00.169114   75550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0919 23:37:00.169139   75550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 23:37:00.470740   75550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 23:37:00.470775   75550 main.go:141] libmachine: Checking connection to Docker...
	I0919 23:37:00.470785   75550 main.go:141] libmachine: (bridge-024908) Calling .GetURL
	I0919 23:37:00.473613   75550 main.go:141] libmachine: (bridge-024908) DBG | using libvirt version 8000000
	I0919 23:37:00.477468   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.477953   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:00.477982   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.478195   75550 main.go:141] libmachine: Docker is up and running!
	I0919 23:37:00.478215   75550 main.go:141] libmachine: Reticulating splines...
	I0919 23:37:00.478223   75550 client.go:171] duration metric: took 21.827694024s to LocalClient.Create
	I0919 23:37:00.478248   75550 start.go:167] duration metric: took 21.827760989s to libmachine.API.Create "bridge-024908"
	I0919 23:37:00.478261   75550 start.go:293] postStartSetup for "bridge-024908" (driver="kvm2")
	I0919 23:37:00.478273   75550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:37:00.478295   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:37:00.478535   75550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:37:00.478571   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:37:00.481680   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.482153   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:00.482184   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.482442   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:37:00.482645   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:00.482831   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:37:00.483038   75550 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa Username:docker}
	I0919 23:37:00.576109   75550 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:37:00.582200   75550 info.go:137] Remote host: Buildroot 2025.02
	I0919 23:37:00.582243   75550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14764/.minikube/addons for local assets ...
	I0919 23:37:00.582311   75550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14764/.minikube/files for local assets ...
	I0919 23:37:00.582384   75550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/ssl/certs/186712.pem -> 186712.pem in /etc/ssl/certs
	I0919 23:37:00.582478   75550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:37:00.597543   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/ssl/certs/186712.pem --> /etc/ssl/certs/186712.pem (1708 bytes)
	I0919 23:37:00.633349   75550 start.go:296] duration metric: took 155.074901ms for postStartSetup
	I0919 23:37:00.633395   75550 main.go:141] libmachine: (bridge-024908) Calling .GetConfigRaw
	I0919 23:37:00.634038   75550 main.go:141] libmachine: (bridge-024908) Calling .GetIP
	I0919 23:37:00.637262   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.637698   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:00.637735   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.638136   75550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/config.json ...
	I0919 23:37:00.638425   75550 start.go:128] duration metric: took 22.007596337s to createHost
	I0919 23:37:00.638457   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:37:00.641395   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.641854   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:00.641897   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.642095   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:37:00.642284   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:00.642435   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:00.642641   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:37:00.642868   75550 main.go:141] libmachine: Using SSH client type: native
	I0919 23:37:00.643171   75550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 192.168.61.181 22 <nil> <nil>}
	I0919 23:37:00.643190   75550 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0919 23:37:00.768225   75550 main.go:141] libmachine: SSH cmd err, output: <nil>: 1758325020.739336391
	
	I0919 23:37:00.768252   75550 fix.go:216] guest clock: 1758325020.739336391
	I0919 23:37:00.768263   75550 fix.go:229] Guest: 2025-09-19 23:37:00.739336391 +0000 UTC Remote: 2025-09-19 23:37:00.638441688 +0000 UTC m=+22.170446021 (delta=100.894703ms)
	I0919 23:37:00.768291   75550 fix.go:200] guest clock delta is within tolerance: 100.894703ms
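The clock check above runs `date +%s.%N` on the guest, parses the result, and accepts the skew if it falls within a tolerance. A rough Go sketch of that comparison follows; the one-second tolerance is an assumption for illustration, not the threshold minikube's fix.go actually uses:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns
	// how far the guest clock is from the local (host) clock.
	func clockDelta(guestOut string) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return time.Since(guest), nil
	}

	func main() {
		d, err := clockDelta("1758325020.739336391")
		if err != nil {
			fmt.Println(err)
			return
		}
		// Accept small skew; the tolerance here is assumed for illustration.
		if math.Abs(d.Seconds()) < 1.0 {
			fmt.Printf("guest clock delta %v is within tolerance\n", d)
		}
	}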
	I0919 23:37:00.768299   75550 start.go:83] releasing machines lock for "bridge-024908", held for 22.137606708s
	I0919 23:37:00.768331   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:37:00.768592   75550 main.go:141] libmachine: (bridge-024908) Calling .GetIP
	I0919 23:37:00.773166   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.773711   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:00.773765   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.774130   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:37:00.774996   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:37:00.775212   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:37:00.775317   75550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:37:00.775371   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:37:00.775710   75550 ssh_runner.go:195] Run: cat /version.json
	I0919 23:37:00.775759   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:37:00.780875   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.782097   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:00.782130   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.782403   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.782790   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:37:00.783061   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:00.783226   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:37:00.783351   75550 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa Username:docker}
	I0919 23:37:00.785472   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:00.785770   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:00.785870   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:37:00.786115   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:00.786286   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:37:00.786490   75550 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa Username:docker}
	I0919 23:37:00.904049   75550 ssh_runner.go:195] Run: systemctl --version
	I0919 23:37:00.913551   75550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 23:37:01.100048   75550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0919 23:37:01.110465   75550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0919 23:37:01.110531   75550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:37:01.136099   75550 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0919 23:37:01.136136   75550 start.go:495] detecting cgroup driver to use...
	I0919 23:37:01.136222   75550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 23:37:01.170382   75550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:37:01.196560   75550 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:37:01.196632   75550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:37:01.218930   75550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:37:01.237586   75550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:37:01.410326   75550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:37:01.652921   75550 docker.go:234] disabling docker service ...
	I0919 23:37:01.652995   75550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:37:01.672795   75550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:37:01.691736   75550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:37:01.900102   75550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:37:02.086155   75550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:37:02.106794   75550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:37:02.134056   75550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 23:37:02.134154   75550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:37:02.147538   75550 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 23:37:02.147639   75550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:37:02.161604   75550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:37:02.176520   75550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:37:02.192461   75550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:37:02.208868   75550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:37:02.224883   75550 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:37:02.252496   75550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
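Taken together, the sed edits above drive /etc/crio/crio.conf.d/02-crio.conf toward a fragment like the following. This is reconstructed from the sed expressions in the log, not captured from the VM, so treat it as a sketch of the intended end state:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]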
	I0919 23:37:02.267672   75550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:37:02.283159   75550 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 23:37:02.283231   75550 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 23:37:02.308362   75550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:37:02.322738   75550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:37:02.512331   75550 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 23:37:02.637192   75550 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 23:37:02.637262   75550 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 23:37:02.643864   75550 start.go:563] Will wait 60s for crictl version
	I0919 23:37:02.643944   75550 ssh_runner.go:195] Run: which crictl
	I0919 23:37:02.648498   75550 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:37:02.700160   75550 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0919 23:37:02.700264   75550 ssh_runner.go:195] Run: crio --version
	I0919 23:37:02.734700   75550 ssh_runner.go:195] Run: crio --version
	I0919 23:37:02.773151   75550 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.29.1 ...
	I0919 23:37:02.774609   75550 main.go:141] libmachine: (bridge-024908) Calling .GetIP
	I0919 23:37:02.778039   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:02.778469   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:02.778496   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:02.778770   75550 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0919 23:37:02.784360   75550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:37:02.805839   75550 kubeadm.go:875] updating cluster {Name:bridge-024908 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-024908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:37:02.805991   75550 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 23:37:02.806075   75550 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:37:02.850423   75550 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.0". assuming images are not preloaded.
	I0919 23:37:02.850484   75550 ssh_runner.go:195] Run: which lz4
	I0919 23:37:02.856435   75550 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0919 23:37:02.863463   75550 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0919 23:37:02.863498   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409455026 bytes)
	I0919 23:37:04.727721   75550 crio.go:462] duration metric: took 1.871326523s to copy over tarball
	I0919 23:37:04.727817   75550 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0919 23:37:06.744137   75550 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.016297676s)
	I0919 23:37:06.744166   75550 crio.go:469] duration metric: took 2.016394617s to extract the tarball
	I0919 23:37:06.744173   75550 ssh_runner.go:146] rm: /preloaded.tar.lz4
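The preload path above is a stat-then-copy-then-extract sequence: if /preloaded.tar.lz4 is absent on the guest, the host copies the cached tarball over, unpacks it under /var with lz4 (preserving xattrs), and removes the tarball. A compressed Go sketch of that flow; runCmd is a stand-in for minikube's ssh_runner (not its real API), and the cache path in main is hypothetical:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runCmd is a stand-in for minikube's ssh_runner: it executes a
	// shell command on the guest and returns any error.
	func runCmd(script string) error {
		return exec.Command("ssh", "docker@192.168.61.181", script).Run()
	}

	func ensurePreload(localTarball string) error {
		// 1. Skip the copy if the tarball is already on the guest.
		if err := runCmd("stat /preloaded.tar.lz4"); err == nil {
			return nil
		}
		// 2. Copy the cached tarball over, mirroring the log's transfer.
		if err := exec.Command("scp", localTarball,
			"docker@192.168.61.181:/preloaded.tar.lz4").Run(); err != nil {
			return err
		}
		// 3. Extract under /var, preserving xattrs, then clean up.
		if err := runCmd("sudo tar --xattrs --xattrs-include security.capability" +
			" -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
			return err
		}
		return runCmd("sudo rm /preloaded.tar.lz4")
	}

	func main() {
		if err := ensurePreload("/tmp/preload-cache/preloaded-images.tar.lz4"); err != nil {
			fmt.Println(err)
		}
	}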
	I0919 23:37:06.792552   75550 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:37:06.851423   75550 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 23:37:06.851455   75550 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:37:06.851466   75550 kubeadm.go:926] updating node { 192.168.61.181 8443 v1.34.0 crio true true} ...
	I0919 23:37:06.851590   75550 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-024908 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:bridge-024908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0919 23:37:06.851675   75550 ssh_runner.go:195] Run: crio config
	I0919 23:37:06.913893   75550 cni.go:84] Creating CNI manager for "bridge"
	I0919 23:37:06.913930   75550 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:37:06.913969   75550 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.181 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-024908 NodeName:bridge-024908 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:37:06.914162   75550 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-024908"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.181"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.181"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:37:06.914242   75550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:37:06.929506   75550 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:37:06.929625   75550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:37:06.944823   75550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0919 23:37:06.971622   75550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:37:06.996579   75550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I0919 23:37:07.021773   75550 ssh_runner.go:195] Run: grep 192.168.61.181	control-plane.minikube.internal$ /etc/hosts
	I0919 23:37:07.026560   75550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:37:07.043397   75550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:37:07.209534   75550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:37:07.232118   75550 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908 for IP: 192.168.61.181
	I0919 23:37:07.232143   75550 certs.go:194] generating shared ca certs ...
	I0919 23:37:07.232158   75550 certs.go:226] acquiring lock for ca certs: {Name:mk1fe71ea89348ba0bd576e99c774a344fba186e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:07.232332   75550 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14764/.minikube/ca.key
	I0919 23:37:07.232379   75550 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.key
	I0919 23:37:07.232393   75550 certs.go:256] generating profile certs ...
	I0919 23:37:07.232459   75550 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.key
	I0919 23:37:07.232478   75550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt with IP's: []
	I0919 23:37:07.278857   75550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt ...
	I0919 23:37:07.278885   75550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: {Name:mk79ccabf3400edf55765f4a8824d93428f42fba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:07.279120   75550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.key ...
	I0919 23:37:07.279138   75550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.key: {Name:mk662a6d2ffc59de416776a7a86f38bc8d65b0b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:07.279246   75550 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.key.e0b1cc24
	I0919 23:37:07.279267   75550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.crt.e0b1cc24 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.181]
	I0919 23:37:07.581312   75550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.crt.e0b1cc24 ...
	I0919 23:37:07.581341   75550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.crt.e0b1cc24: {Name:mkdfa65ad5651321aa3f30249330f65622547baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:07.581533   75550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.key.e0b1cc24 ...
	I0919 23:37:07.581548   75550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.key.e0b1cc24: {Name:mkdabb782e88d32fb84eaf1ac02abafa0c83f4b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:07.581655   75550 certs.go:381] copying /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.crt.e0b1cc24 -> /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.crt
	I0919 23:37:07.581821   75550 certs.go:385] copying /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.key.e0b1cc24 -> /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.key
	I0919 23:37:07.581920   75550 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/proxy-client.key
	I0919 23:37:07.581944   75550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/proxy-client.crt with IP's: []
	I0919 23:37:08.000959   75550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/proxy-client.crt ...
	I0919 23:37:08.000986   75550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/proxy-client.crt: {Name:mkf85afd35073317efa4a6b19e23641c7a331aff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:08.001170   75550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/proxy-client.key ...
	I0919 23:37:08.001184   75550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/proxy-client.key: {Name:mk3f2c425a07ff4d1f574e71c87ee48f134d63bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:08.001382   75550 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/18671.pem (1338 bytes)
	W0919 23:37:08.001415   75550 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14764/.minikube/certs/18671_empty.pem, impossibly tiny 0 bytes
	I0919 23:37:08.001425   75550 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 23:37:08.001445   75550 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/ca.pem (1082 bytes)
	I0919 23:37:08.001471   75550 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:37:08.001492   75550 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/certs/key.pem (1679 bytes)
	I0919 23:37:08.001529   75550 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/ssl/certs/186712.pem (1708 bytes)
	I0919 23:37:08.002199   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:37:08.059820   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 23:37:08.113479   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:37:08.151056   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 23:37:08.188598   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 23:37:08.222703   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:37:08.260566   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:37:08.296326   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 23:37:08.340647   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/ssl/certs/186712.pem --> /usr/share/ca-certificates/186712.pem (1708 bytes)
	I0919 23:37:08.378528   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:37:08.423651   75550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14764/.minikube/certs/18671.pem --> /usr/share/ca-certificates/18671.pem (1338 bytes)
	I0919 23:37:08.467492   75550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:37:08.495077   75550 ssh_runner.go:195] Run: openssl version
	I0919 23:37:08.502873   75550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:37:08.519331   75550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:37:08.527531   75550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:37:08.527600   75550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:37:08.538171   75550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:37:08.554573   75550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18671.pem && ln -fs /usr/share/ca-certificates/18671.pem /etc/ssl/certs/18671.pem"
	I0919 23:37:08.571856   75550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18671.pem
	I0919 23:37:08.578662   75550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:22 /usr/share/ca-certificates/18671.pem
	I0919 23:37:08.578735   75550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18671.pem
	I0919 23:37:08.587689   75550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18671.pem /etc/ssl/certs/51391683.0"
	I0919 23:37:08.603323   75550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/186712.pem && ln -fs /usr/share/ca-certificates/186712.pem /etc/ssl/certs/186712.pem"
	I0919 23:37:08.619563   75550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/186712.pem
	I0919 23:37:08.625749   75550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:22 /usr/share/ca-certificates/186712.pem
	I0919 23:37:08.625815   75550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/186712.pem
	I0919 23:37:08.634002   75550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/186712.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:37:08.650009   75550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:37:08.655522   75550 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:37:08.655576   75550 kubeadm.go:392] StartCluster: {Name:bridge-024908 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:bridge-024908 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:37:08.655638   75550 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 23:37:08.655689   75550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:37:08.698969   75550 cri.go:89] found id: ""
	I0919 23:37:08.699041   75550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:37:08.716698   75550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:37:08.731322   75550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:37:08.751983   75550 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:37:08.752002   75550 kubeadm.go:157] found existing configuration files:
	
	I0919 23:37:08.752058   75550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:37:08.767032   75550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:37:08.767090   75550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:37:08.783316   75550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:37:08.795760   75550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:37:08.795814   75550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:37:08.809265   75550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:37:08.822265   75550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:37:08.822326   75550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:37:08.836477   75550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:37:08.848426   75550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:37:08.848515   75550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:37:08.862265   75550 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0919 23:37:08.922610   75550 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:37:08.922696   75550 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:37:09.034699   75550 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:37:09.034883   75550 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:37:09.035086   75550 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:37:09.047666   75550 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:37:09.321823   75550 out.go:252]   - Generating certificates and keys ...
	I0919 23:37:09.321968   75550 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:37:09.322073   75550 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:37:09.322184   75550 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:37:09.347963   75550 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:37:09.482372   75550 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:37:09.853248   75550 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:37:10.322469   75550 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:37:10.322660   75550 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-024908 localhost] and IPs [192.168.61.181 127.0.0.1 ::1]
	I0919 23:37:10.527309   75550 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:37:10.527480   75550 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-024908 localhost] and IPs [192.168.61.181 127.0.0.1 ::1]
	I0919 23:37:10.753826   75550 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:37:11.368432   75550 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:37:11.755436   75550 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:37:11.755668   75550 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:37:12.305982   75550 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:37:12.696979   75550 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:37:12.850075   75550 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:37:13.115083   75550 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:37:13.231334   75550 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:37:13.232026   75550 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:37:13.234652   75550 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:37:13.303712   75550 out.go:252]   - Booting up control plane ...
	I0919 23:37:13.303896   75550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:37:13.304014   75550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:37:13.304160   75550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:37:13.304311   75550 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:37:13.304450   75550 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:37:13.304603   75550 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:37:13.304756   75550 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:37:13.304824   75550 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:37:13.487400   75550 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:37:13.487533   75550 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:37:14.491714   75550 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003425478s
	I0919 23:37:14.495747   75550 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:37:14.495873   75550 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.61.181:8443/livez
	I0919 23:37:14.496903   75550 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:37:14.497032   75550 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:37:18.996516   75550 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 4.501987206s
	I0919 23:37:19.111283   75550 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.617352883s
	I0919 23:37:21.494958   75550 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 7.001624831s
	I0919 23:37:21.517641   75550 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:37:21.532079   75550 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:37:21.551635   75550 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:37:21.551915   75550 kubeadm.go:310] [mark-control-plane] Marking the node bridge-024908 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:37:21.573788   75550 kubeadm.go:310] [bootstrap-token] Using token: oan5k4.y35iylaimwyxz31p
	I0919 23:37:21.575410   75550 out.go:252]   - Configuring RBAC rules ...
	I0919 23:37:21.575576   75550 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:37:21.586494   75550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:37:21.598851   75550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:37:21.603078   75550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:37:21.608646   75550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:37:21.613569   75550 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:37:21.904188   75550 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:37:22.392583   75550 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:37:22.905202   75550 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:37:22.907292   75550 kubeadm.go:310] 
	I0919 23:37:22.907390   75550 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:37:22.907401   75550 kubeadm.go:310] 
	I0919 23:37:22.907498   75550 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:37:22.907507   75550 kubeadm.go:310] 
	I0919 23:37:22.907580   75550 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:37:22.907702   75550 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:37:22.907802   75550 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:37:22.907816   75550 kubeadm.go:310] 
	I0919 23:37:22.907916   75550 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:37:22.907930   75550 kubeadm.go:310] 
	I0919 23:37:22.907997   75550 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:37:22.908008   75550 kubeadm.go:310] 
	I0919 23:37:22.908080   75550 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:37:22.908200   75550 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:37:22.908319   75550 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:37:22.908332   75550 kubeadm.go:310] 
	I0919 23:37:22.908458   75550 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:37:22.908577   75550 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:37:22.908594   75550 kubeadm.go:310] 
	I0919 23:37:22.908705   75550 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token oan5k4.y35iylaimwyxz31p \
	I0919 23:37:22.908853   75550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:764767ee84c2df0ad4ae14ef93303d4368042da5603c686ffbd3dbfd5d1666a5 \
	I0919 23:37:22.908884   75550 kubeadm.go:310] 	--control-plane 
	I0919 23:37:22.908891   75550 kubeadm.go:310] 
	I0919 23:37:22.909004   75550 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:37:22.909023   75550 kubeadm.go:310] 
	I0919 23:37:22.909139   75550 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token oan5k4.y35iylaimwyxz31p \
	I0919 23:37:22.909286   75550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:764767ee84c2df0ad4ae14ef93303d4368042da5603c686ffbd3dbfd5d1666a5 
	I0919 23:37:22.914281   75550 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:37:22.914320   75550 cni.go:84] Creating CNI manager for "bridge"
	I0919 23:37:22.917094   75550 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0919 23:37:22.918525   75550 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0919 23:37:22.935377   75550 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0919 23:37:22.966357   75550 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:37:22.966416   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:22.966464   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-024908 minikube.k8s.io/updated_at=2025_09_19T23_37_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=bridge-024908 minikube.k8s.io/primary=true
	I0919 23:37:23.203789   75550 ops.go:34] apiserver oom_adj: -16
	I0919 23:37:23.203866   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:23.704828   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:24.204397   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:24.704905   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:25.204337   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:25.704067   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:26.204817   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:26.703983   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:27.204824   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:27.704200   75550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:37:27.797627   75550 kubeadm.go:1105] duration metric: took 4.831273881s to wait for elevateKubeSystemPrivileges
	I0919 23:37:27.797666   75550 kubeadm.go:394] duration metric: took 19.142094171s to StartCluster
	I0919 23:37:27.797683   75550 settings.go:142] acquiring lock: {Name:mk9e6bfe60e4d22990b0b362d40b65315947b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:27.797765   75550 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 23:37:27.798983   75550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14764/kubeconfig: {Name:mk29db95201211dec339ee278b6433541126d194 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:37:27.799265   75550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:37:27.799323   75550 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.181 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 23:37:27.799517   75550 config.go:182] Loaded profile config "bridge-024908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:37:27.799481   75550 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:37:27.799571   75550 addons.go:69] Setting storage-provisioner=true in profile "bridge-024908"
	I0919 23:37:27.799605   75550 addons.go:69] Setting default-storageclass=true in profile "bridge-024908"
	I0919 23:37:27.799629   75550 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-024908"
	I0919 23:37:27.799659   75550 addons.go:238] Setting addon storage-provisioner=true in "bridge-024908"
	I0919 23:37:27.799698   75550 host.go:66] Checking if "bridge-024908" exists ...
	I0919 23:37:27.800067   75550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:37:27.800089   75550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:37:27.800103   75550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:37:27.800128   75550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:37:27.804391   75550 out.go:179] * Verifying Kubernetes components...
	I0919 23:37:27.806295   75550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:37:27.815312   75550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32923
	I0919 23:37:27.815831   75550 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:37:27.816354   75550 main.go:141] libmachine: Using API Version  1
	I0919 23:37:27.816382   75550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:37:27.816817   75550 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:37:27.817042   75550 main.go:141] libmachine: (bridge-024908) Calling .GetState
	I0919 23:37:27.817944   75550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46285
	I0919 23:37:27.818342   75550 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:37:27.818860   75550 main.go:141] libmachine: Using API Version  1
	I0919 23:37:27.818885   75550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:37:27.819383   75550 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:37:27.820077   75550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:37:27.820125   75550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:37:27.820715   75550 addons.go:238] Setting addon default-storageclass=true in "bridge-024908"
	I0919 23:37:27.820768   75550 host.go:66] Checking if "bridge-024908" exists ...
	I0919 23:37:27.821068   75550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:37:27.821113   75550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:37:27.834971   75550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45933
	I0919 23:37:27.835369   75550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43609
	I0919 23:37:27.835635   75550 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:37:27.835973   75550 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:37:27.836163   75550 main.go:141] libmachine: Using API Version  1
	I0919 23:37:27.836184   75550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:37:27.836504   75550 main.go:141] libmachine: Using API Version  1
	I0919 23:37:27.836528   75550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:37:27.836599   75550 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:37:27.836825   75550 main.go:141] libmachine: (bridge-024908) Calling .GetState
	I0919 23:37:27.837003   75550 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:37:27.837582   75550 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:37:27.837629   75550 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:37:27.839286   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:37:27.844832   75550 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:37:27.846566   75550 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:37:27.846593   75550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:37:27.846624   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:37:27.851603   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:27.852222   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:27.852251   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:27.852551   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:37:27.852885   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:27.853113   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:37:27.853322   75550 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa Username:docker}
	I0919 23:37:27.855262   75550 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36445
	I0919 23:37:27.855675   75550 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:37:27.856191   75550 main.go:141] libmachine: Using API Version  1
	I0919 23:37:27.856215   75550 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:37:27.856635   75550 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:37:27.856897   75550 main.go:141] libmachine: (bridge-024908) Calling .GetState
	I0919 23:37:27.859183   75550 main.go:141] libmachine: (bridge-024908) Calling .DriverName
	I0919 23:37:27.859455   75550 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:37:27.859488   75550 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:37:27.859511   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHHostname
	I0919 23:37:27.863208   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:27.863776   75550 main.go:141] libmachine: (bridge-024908) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:4a:f7", ip: ""} in network mk-bridge-024908: {Iface:virbr3 ExpiryTime:2025-09-20 00:36:56 +0000 UTC Type:0 Mac:52:54:00:6c:4a:f7 Iaid: IPaddr:192.168.61.181 Prefix:24 Hostname:bridge-024908 Clientid:01:52:54:00:6c:4a:f7}
	I0919 23:37:27.863807   75550 main.go:141] libmachine: (bridge-024908) DBG | domain bridge-024908 has defined IP address 192.168.61.181 and MAC address 52:54:00:6c:4a:f7 in network mk-bridge-024908
	I0919 23:37:27.864034   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHPort
	I0919 23:37:27.864279   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHKeyPath
	I0919 23:37:27.864464   75550 main.go:141] libmachine: (bridge-024908) Calling .GetSSHUsername
	I0919 23:37:27.864643   75550 sshutil.go:53] new ssh client: &{IP:192.168.61.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/bridge-024908/id_rsa Username:docker}
	I0919 23:37:28.087700   75550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:37:28.113531   75550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:37:28.426265   75550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:37:28.431992   75550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:37:28.884920   75550 start.go:976] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0919 23:37:28.886130   75550 node_ready.go:35] waiting up to 15m0s for node "bridge-024908" to be "Ready" ...
	I0919 23:37:28.906015   75550 node_ready.go:49] node "bridge-024908" is "Ready"
	I0919 23:37:28.906049   75550 node_ready.go:38] duration metric: took 19.859764ms for node "bridge-024908" to be "Ready" ...
	I0919 23:37:28.906066   75550 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:37:28.906123   75550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:37:29.391916   75550 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-024908" context rescaled to 1 replicas
	I0919 23:37:29.498850   75550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.072552362s)
	I0919 23:37:29.498904   75550 main.go:141] libmachine: Making call to close driver server
	I0919 23:37:29.498915   75550 main.go:141] libmachine: (bridge-024908) Calling .Close
	I0919 23:37:29.498918   75550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.066894416s)
	I0919 23:37:29.498957   75550 api_server.go:72] duration metric: took 1.699601344s to wait for apiserver process to appear ...
	I0919 23:37:29.498993   75550 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:37:29.499014   75550 api_server.go:253] Checking apiserver healthz at https://192.168.61.181:8443/healthz ...
	I0919 23:37:29.498961   75550 main.go:141] libmachine: Making call to close driver server
	I0919 23:37:29.499172   75550 main.go:141] libmachine: (bridge-024908) Calling .Close
	I0919 23:37:29.499245   75550 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:37:29.499260   75550 main.go:141] libmachine: (bridge-024908) DBG | Closing plugin on server side
	I0919 23:37:29.499268   75550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:37:29.499277   75550 main.go:141] libmachine: Making call to close driver server
	I0919 23:37:29.499284   75550 main.go:141] libmachine: (bridge-024908) Calling .Close
	I0919 23:37:29.499484   75550 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:37:29.499489   75550 main.go:141] libmachine: (bridge-024908) DBG | Closing plugin on server side
	I0919 23:37:29.499500   75550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:37:29.499508   75550 main.go:141] libmachine: Making call to close driver server
	I0919 23:37:29.499516   75550 main.go:141] libmachine: (bridge-024908) Calling .Close
	I0919 23:37:29.499517   75550 main.go:141] libmachine: (bridge-024908) DBG | Closing plugin on server side
	I0919 23:37:29.499541   75550 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:37:29.499552   75550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:37:29.499784   75550 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:37:29.499798   75550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:37:29.548296   75550 api_server.go:279] https://192.168.61.181:8443/healthz returned 200:
	ok
	I0919 23:37:29.549453   75550 main.go:141] libmachine: Making call to close driver server
	I0919 23:37:29.549472   75550 main.go:141] libmachine: (bridge-024908) Calling .Close
	I0919 23:37:29.549843   75550 main.go:141] libmachine: Successfully made call to close driver server
	I0919 23:37:29.549888   75550 main.go:141] libmachine: (bridge-024908) DBG | Closing plugin on server side
	I0919 23:37:29.549904   75550 main.go:141] libmachine: Making call to close connection to plugin binary
	I0919 23:37:29.551407   75550 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 23:37:29.552566   75550 addons.go:514] duration metric: took 1.753093123s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 23:37:29.552812   75550 api_server.go:141] control plane version: v1.34.0
	I0919 23:37:29.552841   75550 api_server.go:131] duration metric: took 53.840085ms to wait for apiserver health ...
	I0919 23:37:29.552851   75550 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:37:29.574521   75550 system_pods.go:59] 8 kube-system pods found
	I0919 23:37:29.574560   75550 system_pods.go:61] "coredns-66bc5c9577-f6f8d" [d1e7873e-6875-4bbe-8193-28c6bf3b050e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:29.574569   75550 system_pods.go:61] "coredns-66bc5c9577-nnctc" [1e4e50ce-029c-40bd-a5fe-cf3811bd00db] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:29.574574   75550 system_pods.go:61] "etcd-bridge-024908" [5538ee2d-8602-442d-a181-ce78f7bd9108] Running
	I0919 23:37:29.574582   75550 system_pods.go:61] "kube-apiserver-bridge-024908" [05626e88-3d56-41a5-b965-3b2c86e50618] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:37:29.574589   75550 system_pods.go:61] "kube-controller-manager-bridge-024908" [ad57c2f7-909c-4182-98f8-c1bf467a572a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:37:29.574631   75550 system_pods.go:61] "kube-proxy-vswk4" [abc89b80-f24a-4b1d-9553-6c821e379d81] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:37:29.574637   75550 system_pods.go:61] "kube-scheduler-bridge-024908" [7d4c1538-dd14-4373-85b5-15dfcdaea017] Running
	I0919 23:37:29.574644   75550 system_pods.go:61] "storage-provisioner" [811ae028-d815-429e-9b89-15d35c74844d] Pending
	I0919 23:37:29.574650   75550 system_pods.go:74] duration metric: took 21.793524ms to wait for pod list to return data ...
	I0919 23:37:29.574664   75550 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:37:29.606010   75550 default_sa.go:45] found service account: "default"
	I0919 23:37:29.606041   75550 default_sa.go:55] duration metric: took 31.369318ms for default service account to be created ...
	I0919 23:37:29.606052   75550 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:37:29.633174   75550 system_pods.go:86] 8 kube-system pods found
	I0919 23:37:29.633211   75550 system_pods.go:89] "coredns-66bc5c9577-f6f8d" [d1e7873e-6875-4bbe-8193-28c6bf3b050e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:29.633222   75550 system_pods.go:89] "coredns-66bc5c9577-nnctc" [1e4e50ce-029c-40bd-a5fe-cf3811bd00db] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:29.633232   75550 system_pods.go:89] "etcd-bridge-024908" [5538ee2d-8602-442d-a181-ce78f7bd9108] Running
	I0919 23:37:29.633241   75550 system_pods.go:89] "kube-apiserver-bridge-024908" [05626e88-3d56-41a5-b965-3b2c86e50618] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:37:29.633250   75550 system_pods.go:89] "kube-controller-manager-bridge-024908" [ad57c2f7-909c-4182-98f8-c1bf467a572a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:37:29.633260   75550 system_pods.go:89] "kube-proxy-vswk4" [abc89b80-f24a-4b1d-9553-6c821e379d81] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 23:37:29.633269   75550 system_pods.go:89] "kube-scheduler-bridge-024908" [7d4c1538-dd14-4373-85b5-15dfcdaea017] Running
	I0919 23:37:29.633278   75550 system_pods.go:89] "storage-provisioner" [811ae028-d815-429e-9b89-15d35c74844d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:37:29.633306   75550 retry.go:31] will retry after 283.461542ms: missing components: kube-dns, kube-proxy
	I0919 23:37:29.924680   75550 system_pods.go:86] 8 kube-system pods found
	I0919 23:37:29.924717   75550 system_pods.go:89] "coredns-66bc5c9577-f6f8d" [d1e7873e-6875-4bbe-8193-28c6bf3b050e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:29.924740   75550 system_pods.go:89] "coredns-66bc5c9577-nnctc" [1e4e50ce-029c-40bd-a5fe-cf3811bd00db] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:29.924748   75550 system_pods.go:89] "etcd-bridge-024908" [5538ee2d-8602-442d-a181-ce78f7bd9108] Running
	I0919 23:37:29.924757   75550 system_pods.go:89] "kube-apiserver-bridge-024908" [05626e88-3d56-41a5-b965-3b2c86e50618] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:37:29.924766   75550 system_pods.go:89] "kube-controller-manager-bridge-024908" [ad57c2f7-909c-4182-98f8-c1bf467a572a] Running
	I0919 23:37:29.924775   75550 system_pods.go:89] "kube-proxy-vswk4" [abc89b80-f24a-4b1d-9553-6c821e379d81] Running
	I0919 23:37:29.924780   75550 system_pods.go:89] "kube-scheduler-bridge-024908" [7d4c1538-dd14-4373-85b5-15dfcdaea017] Running
	I0919 23:37:29.924788   75550 system_pods.go:89] "storage-provisioner" [811ae028-d815-429e-9b89-15d35c74844d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:37:29.924809   75550 retry.go:31] will retry after 331.047278ms: missing components: kube-dns
	I0919 23:37:30.260743   75550 system_pods.go:86] 8 kube-system pods found
	I0919 23:37:30.260779   75550 system_pods.go:89] "coredns-66bc5c9577-f6f8d" [d1e7873e-6875-4bbe-8193-28c6bf3b050e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:30.260786   75550 system_pods.go:89] "coredns-66bc5c9577-nnctc" [1e4e50ce-029c-40bd-a5fe-cf3811bd00db] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:30.260793   75550 system_pods.go:89] "etcd-bridge-024908" [5538ee2d-8602-442d-a181-ce78f7bd9108] Running
	I0919 23:37:30.260800   75550 system_pods.go:89] "kube-apiserver-bridge-024908" [05626e88-3d56-41a5-b965-3b2c86e50618] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:37:30.260804   75550 system_pods.go:89] "kube-controller-manager-bridge-024908" [ad57c2f7-909c-4182-98f8-c1bf467a572a] Running
	I0919 23:37:30.260807   75550 system_pods.go:89] "kube-proxy-vswk4" [abc89b80-f24a-4b1d-9553-6c821e379d81] Running
	I0919 23:37:30.260811   75550 system_pods.go:89] "kube-scheduler-bridge-024908" [7d4c1538-dd14-4373-85b5-15dfcdaea017] Running
	I0919 23:37:30.260815   75550 system_pods.go:89] "storage-provisioner" [811ae028-d815-429e-9b89-15d35c74844d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:37:30.260828   75550 retry.go:31] will retry after 445.277292ms: missing components: kube-dns
	I0919 23:37:30.712500   75550 system_pods.go:86] 8 kube-system pods found
	I0919 23:37:30.712531   75550 system_pods.go:89] "coredns-66bc5c9577-f6f8d" [d1e7873e-6875-4bbe-8193-28c6bf3b050e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:30.712542   75550 system_pods.go:89] "coredns-66bc5c9577-nnctc" [1e4e50ce-029c-40bd-a5fe-cf3811bd00db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:37:30.712548   75550 system_pods.go:89] "etcd-bridge-024908" [5538ee2d-8602-442d-a181-ce78f7bd9108] Running
	I0919 23:37:30.712558   75550 system_pods.go:89] "kube-apiserver-bridge-024908" [05626e88-3d56-41a5-b965-3b2c86e50618] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:37:30.712562   75550 system_pods.go:89] "kube-controller-manager-bridge-024908" [ad57c2f7-909c-4182-98f8-c1bf467a572a] Running
	I0919 23:37:30.712566   75550 system_pods.go:89] "kube-proxy-vswk4" [abc89b80-f24a-4b1d-9553-6c821e379d81] Running
	I0919 23:37:30.712569   75550 system_pods.go:89] "kube-scheduler-bridge-024908" [7d4c1538-dd14-4373-85b5-15dfcdaea017] Running
	I0919 23:37:30.712572   75550 system_pods.go:89] "storage-provisioner" [811ae028-d815-429e-9b89-15d35c74844d] Running
	I0919 23:37:30.712579   75550 system_pods.go:126] duration metric: took 1.106521028s to wait for k8s-apps to be running ...
	I0919 23:37:30.712585   75550 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:37:30.712643   75550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:37:30.732487   75550 system_svc.go:56] duration metric: took 19.893849ms WaitForService to wait for kubelet
	I0919 23:37:30.732518   75550 kubeadm.go:578] duration metric: took 2.933164933s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:37:30.732534   75550 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:37:30.736021   75550 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0919 23:37:30.736057   75550 node_conditions.go:123] node cpu capacity is 2
	I0919 23:37:30.736074   75550 node_conditions.go:105] duration metric: took 3.536029ms to run NodePressure ...
	I0919 23:37:30.736084   75550 start.go:241] waiting for startup goroutines ...
	I0919 23:37:30.736093   75550 start.go:246] waiting for cluster config update ...
	I0919 23:37:30.736106   75550 start.go:255] writing updated cluster config ...
	I0919 23:37:30.736425   75550 ssh_runner.go:195] Run: rm -f paused
	I0919 23:37:30.742435   75550 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:37:30.747825   75550 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f6f8d" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:37:32.755467   75550 pod_ready.go:104] pod "coredns-66bc5c9577-f6f8d" is not "Ready", error: <nil>
	W0919 23:37:35.254202   75550 pod_ready.go:104] pod "coredns-66bc5c9577-f6f8d" is not "Ready", error: <nil>
	W0919 23:37:37.255433   75550 pod_ready.go:104] pod "coredns-66bc5c9577-f6f8d" is not "Ready", error: <nil>
	W0919 23:37:39.256877   75550 pod_ready.go:104] pod "coredns-66bc5c9577-f6f8d" is not "Ready", error: <nil>
	I0919 23:37:40.751573   75550 pod_ready.go:99] pod "coredns-66bc5c9577-f6f8d" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-f6f8d" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-f6f8d" not found
	I0919 23:37:40.751605   75550 pod_ready.go:86] duration metric: took 10.00375082s for pod "coredns-66bc5c9577-f6f8d" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:37:40.751644   75550 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nnctc" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:37:42.757893   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:37:45.257639   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:37:47.258820   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:37:49.260985   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:37:51.758752   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:37:53.758976   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:37:56.258048   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:37:58.258337   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:38:00.764113   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	W0919 23:38:03.259818   75550 pod_ready.go:104] pod "coredns-66bc5c9577-nnctc" is not "Ready", error: <nil>
	I0919 23:38:04.757900   75550 pod_ready.go:94] pod "coredns-66bc5c9577-nnctc" is "Ready"
	I0919 23:38:04.757930   75550 pod_ready.go:86] duration metric: took 24.006278671s for pod "coredns-66bc5c9577-nnctc" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:04.761252   75550 pod_ready.go:83] waiting for pod "etcd-bridge-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:04.766557   75550 pod_ready.go:94] pod "etcd-bridge-024908" is "Ready"
	I0919 23:38:04.766585   75550 pod_ready.go:86] duration metric: took 5.299603ms for pod "etcd-bridge-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:04.769274   75550 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:04.775608   75550 pod_ready.go:94] pod "kube-apiserver-bridge-024908" is "Ready"
	I0919 23:38:04.775639   75550 pod_ready.go:86] duration metric: took 6.336927ms for pod "kube-apiserver-bridge-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:04.778115   75550 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:04.956080   75550 pod_ready.go:94] pod "kube-controller-manager-bridge-024908" is "Ready"
	I0919 23:38:04.956114   75550 pod_ready.go:86] duration metric: took 177.969622ms for pod "kube-controller-manager-bridge-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:05.156159   75550 pod_ready.go:83] waiting for pod "kube-proxy-vswk4" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:05.556114   75550 pod_ready.go:94] pod "kube-proxy-vswk4" is "Ready"
	I0919 23:38:05.556141   75550 pod_ready.go:86] duration metric: took 399.958599ms for pod "kube-proxy-vswk4" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:05.756089   75550 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:06.155987   75550 pod_ready.go:94] pod "kube-scheduler-bridge-024908" is "Ready"
	I0919 23:38:06.156017   75550 pod_ready.go:86] duration metric: took 399.900092ms for pod "kube-scheduler-bridge-024908" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:38:06.156030   75550 pod_ready.go:40] duration metric: took 35.413562399s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:38:06.202255   75550 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:38:06.204902   75550 out.go:179] * Done! kubectl is now configured to use "bridge-024908" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.435238331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758326090435209208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d69e308-a701-4d4d-a4f6-f888bfa5d55a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.436709010Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48447340-67df-48da-bca4-ac48d8fc7536 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.436783117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48447340-67df-48da-bca4-ac48d8fc7536 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.437086611Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0612c7330c35c048f4c63d0dfea394321ccf6c624791adbf22742280a6e79187,PodSandboxId:7dc46f2e1c62053024540103a673c22fa46949c4bd447d6861b210bb30b60eff,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1758325979633593745,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-mwz54,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 911a6b3c-3441-40a6-ac54-04cf424c179b,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec2522d96a5ed9e24e9ecf32e0d0c252795dcd9c98c44a5b6a16b9ea4a1a9e2,PodSandboxId:206391a3986c3bb63b39cf935fc260351d4023074f7b7aba696e1594757a1cef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758325038465690059,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 874b675f-ecbe-4052-a6fb-bc7a6028db03,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:375a7d2fd8f93e60f9c3abc0d93eabc7ac7f390406b748cd91b0f70fa45e969d,PodSandboxId:1b559b4a4e82ced80c6bfe6ad4227f9e84668422a12a82317372e8b4f4c11dab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758325025436851219,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321717a-b901-415e-b199-977471c0ff1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34682f7775d4f0930f8412d24f895144c1151ebff6abf2b263cad9f204138b14,PodSandboxId:1de7d351999a23a7f5726463eb34fd9c478fec111f541435517cd97a2cc3a5a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758325001966050992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qxgj9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6340754-da46-4e31-9f54-feec6a797beb,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a89ff1d5c44294b0a6b690b20f0b8f8fc88e97e56917b2944fb036c040abdb,PodSandboxId:da97ab890a843eb13424019f4a031b251cba4d13263e684590ede4a3203ac1b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758324994368240805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr2bk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8b6af-3927-4e0c-a567-28aca5e8cd79,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3436dea0618688158ab7ae5e858bc14d034997b2f4f38b97b32b4274515a49ce,PodSandboxId:1b559b4a4e82ced80c6bfe6ad4227f9e84668422a12a82317372e8b4f4c11dab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1758324994446061048,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321717a-b901-415e-b199-977471c0ff1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5e899743d432ddcb109979a1ab6e43a88582b10f19fce066e46d5777c20fd2,PodSandboxId:f773b827713b4e5ececda93e3a2c843c58d602181aa64c2ea25ab84b0029e3eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTA
INER_RUNNING,CreatedAt:1758324988850025144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b8cf1138d801c35b4e4cb07a863160e,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b446ba7b5e7f19fe9fc931d0c6005d9015217d26588293b0f5ebfba7a46b9f14,PodSandboxId:6a793c0aa0569cdec519d3e4db356bf00a6f008c3d4825fc63c467b421b96247,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758324988840674551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d5c2a791d209fbb8e019f27ce69c24a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b58f5c9dbb6070a2649d925cf0f5e747933442ed0c1077d4e04d65cde0aa5,PodSandboxId:ee9c4e628431cbb4790bcad695abc86d7bad9b9d0133901d3f8e7af771bf2b5b,Metadata:&ContainerMetadata{Name:etcd,Attem
pt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758324988805933722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53b848b476e69d25c1f04609257642b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599395f360461d56196f5088ecd3da31a52b32cd771d2c1745f745d0a8515293,PodSandboxId:68cb9daeb17d4acdf5a
36f6b60a2967df65ea3cc7501a6efe96b06776ecd4bbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758324988741030233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaf7793ef540d751420bc805bb28b292,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},},}" file="otel-collector/interceptors.go:74" id=48447340-67df-48da-bca4-ac48d8fc7536 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.485239312Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0ba119f-655d-44e6-adb0-5647446fcbba name=/runtime.v1.RuntimeService/Version
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.485378202Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0ba119f-655d-44e6-adb0-5647446fcbba name=/runtime.v1.RuntimeService/Version
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.486969067Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b4fe41b5-2223-496e-8bf6-604e57b46da7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.487555542Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758326090487530529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b4fe41b5-2223-496e-8bf6-604e57b46da7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.488361191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd5c54cd-5e80-43b1-9442-ae03117f424e name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.488746646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd5c54cd-5e80-43b1-9442-ae03117f424e name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.489575285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0612c7330c35c048f4c63d0dfea394321ccf6c624791adbf22742280a6e79187,PodSandboxId:7dc46f2e1c62053024540103a673c22fa46949c4bd447d6861b210bb30b60eff,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1758325979633593745,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-mwz54,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 911a6b3c-3441-40a6-ac54-04cf424c179b,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec2522d96a5ed9e24e9ecf32e0d0c252795dcd9c98c44a5b6a16b9ea4a1a9e2,PodSandboxId:206391a3986c3bb63b39cf935fc260351d4023074f7b7aba696e1594757a1cef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758325038465690059,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 874b675f-ecbe-4052-a6fb-bc7a6028db03,},Annotations
:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:375a7d2fd8f93e60f9c3abc0d93eabc7ac7f390406b748cd91b0f70fa45e969d,PodSandboxId:1b559b4a4e82ced80c6bfe6ad4227f9e84668422a12a82317372e8b4f4c11dab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758325025436851219,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321717a-b901-415e-b199-977471c0ff1f,},Annotations:map[string]
string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34682f7775d4f0930f8412d24f895144c1151ebff6abf2b263cad9f204138b14,PodSandboxId:1de7d351999a23a7f5726463eb34fd9c478fec111f541435517cd97a2cc3a5a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758325001966050992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qxgj9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6340754-da46-4e31-9f54-feec6a797beb,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a89ff1d5c44294b0a6b690b20f0b8f8fc88e97e56917b2944fb036c040abdb,PodSandboxId:da97ab890a843eb13424019f4a031b251cba4d13263e684590ede4a3203ac1b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b
97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758324994368240805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr2bk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8b6af-3927-4e0c-a567-28aca5e8cd79,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3436dea0618688158ab7ae5e858bc14d034997b2f4f38b97b32b4274515a49ce,PodSandboxId:1b559b4a4e82ced80c6bfe6ad4227f9e84668422a12a82317372e8b4f4c11dab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_EXITED,CreatedAt:1758324994446061048,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321717a-b901-415e-b199-977471c0ff1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5e899743d432ddcb109979a1ab6e43a88582b10f19fce066e46d5777c20fd2,PodSandboxId:f773b827713b4e5ececda93e3a2c843c58d602181aa64c2ea25ab84b0029e3eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTA
INER_RUNNING,CreatedAt:1758324988850025144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b8cf1138d801c35b4e4cb07a863160e,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b446ba7b5e7f19fe9fc931d0c6005d9015217d26588293b0f5ebfba7a46b9f14,PodSandboxId:6a793c0aa0569cdec519d3e4db356bf00a6f008c3d4825fc63c467b421b96247,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758324988840674551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d5c2a791d209fbb8e019f27ce69c24a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b58f5c9dbb6070a2649d925cf0f5e747933442ed0c1077d4e04d65cde0aa5,PodSandboxId:ee9c4e628431cbb4790bcad695abc86d7bad9b9d0133901d3f8e7af771bf2b5b,Metadata:&ContainerMetadata{Name:etcd,Attem
pt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758324988805933722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53b848b476e69d25c1f04609257642b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599395f360461d56196f5088ecd3da31a52b32cd771d2c1745f745d0a8515293,PodSandboxId:68cb9daeb17d4acdf5a
36f6b60a2967df65ea3cc7501a6efe96b06776ecd4bbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758324988741030233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaf7793ef540d751420bc805bb28b292,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},},}" file="otel-collector/interceptors.go:74" id=cd5c54cd-5e80-43b1-9442-ae03117f424e name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.529161249Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92f2e185-cd85-4bde-9ecc-e956d4ab6cbf name=/runtime.v1.RuntimeService/Version
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.529259213Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92f2e185-cd85-4bde-9ecc-e956d4ab6cbf name=/runtime.v1.RuntimeService/Version
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.530986377Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fac53f46-6c20-4f69-a8e8-a95ae68b18b1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.531515161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758326090531490964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fac53f46-6c20-4f69-a8e8-a95ae68b18b1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.532107321Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f90d862a-1b4c-46b9-bfe1-c0a61ff061f5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.532170550Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f90d862a-1b4c-46b9-bfe1-c0a61ff061f5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.532633728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0612c7330c35c048f4c63d0dfea394321ccf6c624791adbf22742280a6e79187,PodSandboxId:7dc46f2e1c62053024540103a673c22fa46949c4bd447d6861b210bb30b60eff,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1758325979633593745,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-mwz54,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 911a6b3c-3441-40a6-ac54-04cf424c179b,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec2522d96a5ed9e24e9ecf32e0d0c252795dcd9c98c44a5b6a16b9ea4a1a9e2,PodSandboxId:206391a3986c3bb63b39cf935fc260351d4023074f7b7aba696e1594757a1cef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758325038465690059,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 874b675f-ecbe-4052-a6fb-bc7a6028db03,},Annotations
:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:375a7d2fd8f93e60f9c3abc0d93eabc7ac7f390406b748cd91b0f70fa45e969d,PodSandboxId:1b559b4a4e82ced80c6bfe6ad4227f9e84668422a12a82317372e8b4f4c11dab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758325025436851219,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321717a-b901-415e-b199-977471c0ff1f,},Annotations:map[string]
string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34682f7775d4f0930f8412d24f895144c1151ebff6abf2b263cad9f204138b14,PodSandboxId:1de7d351999a23a7f5726463eb34fd9c478fec111f541435517cd97a2cc3a5a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758325001966050992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qxgj9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6340754-da46-4e31-9f54-feec6a797beb,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a89ff1d5c44294b0a6b690b20f0b8f8fc88e97e56917b2944fb036c040abdb,PodSandboxId:da97ab890a843eb13424019f4a031b251cba4d13263e684590ede4a3203ac1b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b
97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758324994368240805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr2bk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8b6af-3927-4e0c-a567-28aca5e8cd79,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3436dea0618688158ab7ae5e858bc14d034997b2f4f38b97b32b4274515a49ce,PodSandboxId:1b559b4a4e82ced80c6bfe6ad4227f9e84668422a12a82317372e8b4f4c11dab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_EXITED,CreatedAt:1758324994446061048,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321717a-b901-415e-b199-977471c0ff1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5e899743d432ddcb109979a1ab6e43a88582b10f19fce066e46d5777c20fd2,PodSandboxId:f773b827713b4e5ececda93e3a2c843c58d602181aa64c2ea25ab84b0029e3eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTA
INER_RUNNING,CreatedAt:1758324988850025144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b8cf1138d801c35b4e4cb07a863160e,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b446ba7b5e7f19fe9fc931d0c6005d9015217d26588293b0f5ebfba7a46b9f14,PodSandboxId:6a793c0aa0569cdec519d3e4db356bf00a6f008c3d4825fc63c467b421b96247,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758324988840674551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d5c2a791d209fbb8e019f27ce69c24a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b58f5c9dbb6070a2649d925cf0f5e747933442ed0c1077d4e04d65cde0aa5,PodSandboxId:ee9c4e628431cbb4790bcad695abc86d7bad9b9d0133901d3f8e7af771bf2b5b,Metadata:&ContainerMetadata{Name:etcd,Attem
pt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758324988805933722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53b848b476e69d25c1f04609257642b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599395f360461d56196f5088ecd3da31a52b32cd771d2c1745f745d0a8515293,PodSandboxId:68cb9daeb17d4acdf5a
36f6b60a2967df65ea3cc7501a6efe96b06776ecd4bbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758324988741030233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaf7793ef540d751420bc805bb28b292,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},},}" file="otel-collector/interceptors.go:74" id=f90d862a-1b4c-46b9-bfe1-c0a61ff061f5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.573218936Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=312b1deb-16af-417b-b9a8-b9bf65337b27 name=/runtime.v1.RuntimeService/Version
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.573396926Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=312b1deb-16af-417b-b9a8-b9bf65337b27 name=/runtime.v1.RuntimeService/Version
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.575721081Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91ab1a3c-91dc-40bd-bfab-d8d6cfa99471 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.576531907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1758326090576506062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91ab1a3c-91dc-40bd-bfab-d8d6cfa99471 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.577242546Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5457a029-4c22-4107-b4cf-d54adab6e2b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.577388775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5457a029-4c22-4107-b4cf-d54adab6e2b1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 19 23:54:50 default-k8s-diff-port-304197 crio[882]: time="2025-09-19 23:54:50.577649913Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0612c7330c35c048f4c63d0dfea394321ccf6c624791adbf22742280a6e79187,PodSandboxId:7dc46f2e1c62053024540103a673c22fa46949c4bd447d6861b210bb30b60eff,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1758325979633593745,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-mwz54,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 911a6b3c-3441-40a6-ac54-04cf424c179b,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ec2522d96a5ed9e24e9ecf32e0d0c252795dcd9c98c44a5b6a16b9ea4a1a9e2,PodSandboxId:206391a3986c3bb63b39cf935fc260351d4023074f7b7aba696e1594757a1cef,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1758325038465690059,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 874b675f-ecbe-4052-a6fb-bc7a6028db03,},Annotations
:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:375a7d2fd8f93e60f9c3abc0d93eabc7ac7f390406b748cd91b0f70fa45e969d,PodSandboxId:1b559b4a4e82ced80c6bfe6ad4227f9e84668422a12a82317372e8b4f4c11dab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1758325025436851219,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321717a-b901-415e-b199-977471c0ff1f,},Annotations:map[string]
string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34682f7775d4f0930f8412d24f895144c1151ebff6abf2b263cad9f204138b14,PodSandboxId:1de7d351999a23a7f5726463eb34fd9c478fec111f541435517cd97a2cc3a5a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1758325001966050992,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qxgj9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6340754-da46-4e31-9f54-feec6a797beb,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a89ff1d5c44294b0a6b690b20f0b8f8fc88e97e56917b2944fb036c040abdb,PodSandboxId:da97ab890a843eb13424019f4a031b251cba4d13263e684590ede4a3203ac1b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:df0860106674df871eebbd01fede90c764bf472f5b
97eca7e945761292e9b0ce,State:CONTAINER_RUNNING,CreatedAt:1758324994368240805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr2bk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02b8b6af-3927-4e0c-a567-28aca5e8cd79,},Annotations:map[string]string{io.kubernetes.container.hash: e2e56a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3436dea0618688158ab7ae5e858bc14d034997b2f4f38b97b32b4274515a49ce,PodSandboxId:1b559b4a4e82ced80c6bfe6ad4227f9e84668422a12a82317372e8b4f4c11dab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_EXITED,CreatedAt:1758324994446061048,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321717a-b901-415e-b199-977471c0ff1f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e5e899743d432ddcb109979a1ab6e43a88582b10f19fce066e46d5777c20fd2,PodSandboxId:f773b827713b4e5ececda93e3a2c843c58d602181aa64c2ea25ab84b0029e3eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634,State:CONTA
INER_RUNNING,CreatedAt:1758324988850025144,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b8cf1138d801c35b4e4cb07a863160e,},Annotations:map[string]string{io.kubernetes.container.hash: 7eaa1830,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b446ba7b5e7f19fe9fc931d0c6005d9015217d26588293b0f5ebfba7a46b9f14,PodSandboxId:6a793c0aa0569cdec519d3e4db356bf00a6f008c3d4825fc63c467b421b96247,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc,State:CONTAINER_RUNNING,CreatedAt:1758324988840674551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d5c2a791d209fbb8e019f27ce69c24a,},Annotations:map[string]string{io.kubernetes.container.hash: 85eae708,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7b58f5c9dbb6070a2649d925cf0f5e747933442ed0c1077d4e04d65cde0aa5,PodSandboxId:ee9c4e628431cbb4790bcad695abc86d7bad9b9d0133901d3f8e7af771bf2b5b,Metadata:&ContainerMetadata{Name:etcd,Attem
pt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1758324988805933722,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53b848b476e69d25c1f04609257642b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599395f360461d56196f5088ecd3da31a52b32cd771d2c1745f745d0a8515293,PodSandboxId:68cb9daeb17d4acdf5a
36f6b60a2967df65ea3cc7501a6efe96b06776ecd4bbb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90,State:CONTAINER_RUNNING,CreatedAt:1758324988741030233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-304197,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aaf7793ef540d751420bc805bb28b292,},Annotations:map[string]string{io.kubernetes.container.hash: d671eaa0,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},},}" file="otel-collector/interceptors.go:74" id=5457a029-4c22-4107-b4cf-d54adab6e2b1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	0612c7330c35c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                      About a minute ago   Exited              dashboard-metrics-scraper   8                   7dc46f2e1c620       dashboard-metrics-scraper-6ffb444bf9-mwz54
	6ec2522d96a5e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   17 minutes ago       Running             busybox                     1                   206391a3986c3       busybox
	375a7d2fd8f93       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago       Running             storage-provisioner         3                   1b559b4a4e82c       storage-provisioner
	34682f7775d4f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      18 minutes ago       Running             coredns                     1                   1de7d351999a2       coredns-66bc5c9577-qxgj9
	3436dea061868       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago       Exited              storage-provisioner         2                   1b559b4a4e82c       storage-provisioner
	74a89ff1d5c44       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      18 minutes ago       Running             kube-proxy                  1                   da97ab890a843       kube-proxy-hr2bk
	8e5e899743d43       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      18 minutes ago       Running             kube-controller-manager     1                   f773b827713b4       kube-controller-manager-default-k8s-diff-port-304197
	b446ba7b5e7f1       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      18 minutes ago       Running             kube-scheduler              1                   6a793c0aa0569       kube-scheduler-default-k8s-diff-port-304197
	2c7b58f5c9dbb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      18 minutes ago       Running             etcd                        1                   ee9c4e628431c       etcd-default-k8s-diff-port-304197
	599395f360461       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      18 minutes ago       Running             kube-apiserver              1                   68cb9daeb17d4       kube-apiserver-default-k8s-diff-port-304197
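	
	The scraper container above is the one failure visible in this dump: attempt 8, state Exited, while every other container is Running. A minimal sketch for pulling its crash output, assuming the profile and its minikube-managed kubeconfig context are still around (crictl accepts the truncated container id as a prefix):
	
	  out/minikube-linux-amd64 -p default-k8s-diff-port-304197 ssh -- sudo crictl logs 0612c7330c35c
	  kubectl --context default-k8s-diff-port-304197 -n kubernetes-dashboard \
	    logs dashboard-metrics-scraper-6ffb444bf9-mwz54 --previous
	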
	
	
	==> coredns [34682f7775d4f0930f8412d24f895144c1151ebff6abf2b263cad9f204138b14] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42182 - 16622 "HINFO IN 6384841599668461451.6666792259869803210. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.06272829s
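	
	The single HINFO query against a random name is the CoreDNS loop plugin's self-probe; NXDOMAIN is the expected healthy answer (a forwarding loop would echo the query back to CoreDNS). A quick hedged check that this CoreDNS resolves cluster names, with the busybox image tag being an assumption:
	
	  kubectl --context default-k8s-diff-port-304197 run dns-probe --rm -it --restart=Never \
	    --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local
	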
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-304197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-304197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=default-k8s-diff-port-304197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_33_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:33:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-304197
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:54:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:53:23 +0000   Fri, 19 Sep 2025 23:33:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:53:23 +0000   Fri, 19 Sep 2025 23:33:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:53:23 +0000   Fri, 19 Sep 2025 23:33:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:53:23 +0000   Fri, 19 Sep 2025 23:36:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    default-k8s-diff-port-304197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 3bba7fc49fe94e02803c493aa434f4d2
	  System UUID:                3bba7fc4-9fe9-4e02-803c-493aa434f4d2
	  Boot ID:                    4b56d174-1dea-46ec-8824-157e27d5086d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-66bc5c9577-qxgj9                                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 etcd-default-k8s-diff-port-304197                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-304197             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-304197    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-hr2bk                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-304197             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-746fcd58dc-7rhgt                         100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-mwz54              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dscz6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasSufficientPID
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   NodeReady                21m                kubelet          Node default-k8s-diff-port-304197 status is now: NodeReady
	  Normal   RegisteredNode           21m                node-controller  Node default-k8s-diff-port-304197 event: Registered Node default-k8s-diff-port-304197 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node default-k8s-diff-port-304197 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18m                kubelet          Node default-k8s-diff-port-304197 has been rebooted, boot id: 4b56d174-1dea-46ec-8824-157e27d5086d
	  Normal   RegisteredNode           18m                node-controller  Node default-k8s-diff-port-304197 event: Registered Node default-k8s-diff-port-304197 in Controller
	
	
	==> dmesg <==
	[Sep19 23:36] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000049] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000768] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.831113] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.161296] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.141672] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.686796] kauditd_printk_skb: 168 callbacks suppressed
	[  +0.347923] kauditd_printk_skb: 161 callbacks suppressed
	[  +2.850380] kauditd_printk_skb: 182 callbacks suppressed
	[Sep19 23:37] kauditd_printk_skb: 56 callbacks suppressed
	[ +12.043446] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.180570] kauditd_printk_skb: 11 callbacks suppressed
	[Sep19 23:38] kauditd_printk_skb: 6 callbacks suppressed
	[Sep19 23:39] kauditd_printk_skb: 6 callbacks suppressed
	[Sep19 23:42] kauditd_printk_skb: 6 callbacks suppressed
	[Sep19 23:47] kauditd_printk_skb: 6 callbacks suppressed
	[Sep19 23:52] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [2c7b58f5c9dbb6070a2649d925cf0f5e747933442ed0c1077d4e04d65cde0aa5] <==
	{"level":"warn","ts":"2025-09-19T23:36:31.277406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.293390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.345391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.370137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.394072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.417504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.442683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.462824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.481089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.504181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.521012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.544462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.575234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.610685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.637822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:31.784014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:36:42.192004Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.415877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-qxgj9\" limit:1 ","response":"range_response_count:1 size:5464"}
	{"level":"info","ts":"2025-09-19T23:36:42.192117Z","caller":"traceutil/trace.go:172","msg":"trace[33012178] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-qxgj9; range_end:; response_count:1; response_revision:674; }","duration":"131.564339ms","start":"2025-09-19T23:36:42.060540Z","end":"2025-09-19T23:36:42.192105Z","steps":["trace[33012178] 'range keys from in-memory index tree'  (duration: 131.025228ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:36:44.628632Z","caller":"traceutil/trace.go:172","msg":"trace[1541362267] transaction","detail":"{read_only:false; response_revision:693; number_of_response:1; }","duration":"141.674497ms","start":"2025-09-19T23:36:44.486944Z","end":"2025-09-19T23:36:44.628619Z","steps":["trace[1541362267] 'process raft request'  (duration: 141.549801ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T23:46:30.264902Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1050}
	{"level":"info","ts":"2025-09-19T23:46:30.290785Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1050,"took":"25.426812ms","hash":2456768064,"current-db-size-bytes":3272704,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1359872,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-09-19T23:46:30.290913Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2456768064,"revision":1050,"compact-revision":-1}
	{"level":"info","ts":"2025-09-19T23:51:30.271769Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1337}
	{"level":"info","ts":"2025-09-19T23:51:30.276763Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1337,"took":"4.496266ms","hash":2189398233,"current-db-size-bytes":3272704,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1851392,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-19T23:51:30.276834Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2189398233,"revision":1337,"compact-revision":1050}
	
	
	==> kernel <==
	 23:54:50 up 18 min,  0 users,  load average: 0.12, 0.19, 0.18
	Linux default-k8s-diff-port-304197 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Sep  9 02:24:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [599395f360461d56196f5088ecd3da31a52b32cd771d2c1745f745d0a8515293] <==
	 > logger="UnhandledError"
	I0919 23:51:33.965151       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 23:52:19.013164       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0919 23:52:33.965059       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:52:33.965128       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 23:52:33.965185       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 23:52:33.965506       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:52:33.965590       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:52:33.968999       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 23:52:39.377180       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 23:53:30.809593       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 23:53:57.492842       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0919 23:54:33.965442       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:54:33.965510       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 23:54:33.965526       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 23:54:33.969119       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:54:33.969738       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:54:33.969770       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 23:54:34.185799       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [8e5e899743d432ddcb109979a1ab6e43a88582b10f19fce066e46d5777c20fd2] <==
	I0919 23:48:38.709785       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:49:08.562671       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:49:08.718034       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:49:38.567935       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:49:38.727246       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:50:08.572860       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:50:08.734924       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:50:38.578088       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:50:38.742452       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:51:08.583409       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:51:08.750676       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:51:38.590456       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:51:38.760693       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:52:08.595043       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:52:08.769220       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:52:38.599948       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:52:38.777631       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:53:08.604473       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:53:08.786372       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:53:38.609779       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:53:38.796603       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:54:08.615116       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:54:08.805213       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0919 23:54:38.620887       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:54:38.815751       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [74a89ff1d5c44294b0a6b690b20f0b8f8fc88e97e56917b2944fb036c040abdb] <==
	I0919 23:36:34.923447       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:36:35.024261       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:36:35.024564       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.80"]
	E0919 23:36:35.025502       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:36:35.078609       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I0919 23:36:35.078750       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0919 23:36:35.078847       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:36:35.094144       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:36:35.094604       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:36:35.094654       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:36:35.100126       1 config.go:200] "Starting service config controller"
	I0919 23:36:35.100194       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:36:35.100227       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:36:35.100241       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:36:35.100261       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:36:35.100353       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:36:35.107078       1 config.go:309] "Starting node config controller"
	I0919 23:36:35.107588       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:36:35.107708       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:36:35.200954       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 23:36:35.200955       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 23:36:35.200978       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b446ba7b5e7f19fe9fc931d0c6005d9015217d26588293b0f5ebfba7a46b9f14] <==
	I0919 23:36:32.885210       1 serving.go:386] Generated self-signed cert in-memory
	I0919 23:36:34.500260       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 23:36:34.504506       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:36:34.516260       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 23:36:34.516864       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0919 23:36:34.516963       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0919 23:36:34.517024       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 23:36:34.518422       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:36:34.520357       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:36:34.520456       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 23:36:34.520469       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 23:36:34.617452       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0919 23:36:34.621011       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0919 23:36:34.621075       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 23:54:07 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:07.879099    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758326047878568115  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:54:07 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:07.879623    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758326047878568115  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:54:08 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:08.618042    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7rhgt" podUID="64e629d6-5d4b-49e5-ac73-5a67b6f877b4"
	Sep 19 23:54:09 default-k8s-diff-port-304197 kubelet[1212]: I0919 23:54:09.616608    1212 scope.go:117] "RemoveContainer" containerID="0612c7330c35c048f4c63d0dfea394321ccf6c624791adbf22742280a6e79187"
	Sep 19 23:54:09 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:09.616757    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mwz54_kubernetes-dashboard(911a6b3c-3441-40a6-ac54-04cf424c179b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mwz54" podUID="911a6b3c-3441-40a6-ac54-04cf424c179b"
	Sep 19 23:54:17 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:17.618698    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dscz6" podUID="92d8e2bb-d9b6-4e61-8313-c3c386feb5dd"
	Sep 19 23:54:17 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:17.882365    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758326057881921531  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:54:17 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:17.882405    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758326057881921531  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:54:21 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:21.621589    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7rhgt" podUID="64e629d6-5d4b-49e5-ac73-5a67b6f877b4"
	Sep 19 23:54:24 default-k8s-diff-port-304197 kubelet[1212]: I0919 23:54:24.616569    1212 scope.go:117] "RemoveContainer" containerID="0612c7330c35c048f4c63d0dfea394321ccf6c624791adbf22742280a6e79187"
	Sep 19 23:54:24 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:24.616782    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mwz54_kubernetes-dashboard(911a6b3c-3441-40a6-ac54-04cf424c179b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mwz54" podUID="911a6b3c-3441-40a6-ac54-04cf424c179b"
	Sep 19 23:54:27 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:27.883959    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758326067883645578  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:54:27 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:27.883981    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758326067883645578  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:54:29 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:29.618875    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dscz6" podUID="92d8e2bb-d9b6-4e61-8313-c3c386feb5dd"
	Sep 19 23:54:36 default-k8s-diff-port-304197 kubelet[1212]: I0919 23:54:36.616126    1212 scope.go:117] "RemoveContainer" containerID="0612c7330c35c048f4c63d0dfea394321ccf6c624791adbf22742280a6e79187"
	Sep 19 23:54:36 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:36.616348    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mwz54_kubernetes-dashboard(911a6b3c-3441-40a6-ac54-04cf424c179b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mwz54" podUID="911a6b3c-3441-40a6-ac54-04cf424c179b"
	Sep 19 23:54:36 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:36.618163    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7rhgt" podUID="64e629d6-5d4b-49e5-ac73-5a67b6f877b4"
	Sep 19 23:54:37 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:37.886248    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758326077885870034  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:54:37 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:37.886339    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758326077885870034  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:54:40 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:40.618154    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dscz6" podUID="92d8e2bb-d9b6-4e61-8313-c3c386feb5dd"
	Sep 19 23:54:47 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:47.888451    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758326087888119217  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:54:47 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:47.888472    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758326087888119217  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Sep 19 23:54:49 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:49.621843    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-7rhgt" podUID="64e629d6-5d4b-49e5-ac73-5a67b6f877b4"
	Sep 19 23:54:50 default-k8s-diff-port-304197 kubelet[1212]: I0919 23:54:50.616398    1212 scope.go:117] "RemoveContainer" containerID="0612c7330c35c048f4c63d0dfea394321ccf6c624791adbf22742280a6e79187"
	Sep 19 23:54:50 default-k8s-diff-port-304197 kubelet[1212]: E0919 23:54:50.616555    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mwz54_kubernetes-dashboard(911a6b3c-3441-40a6-ac54-04cf424c179b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mwz54" podUID="911a6b3c-3441-40a6-ac54-04cf424c179b"
	
	
	==> storage-provisioner [3436dea0618688158ab7ae5e858bc14d034997b2f4f38b97b32b4274515a49ce] <==
	I0919 23:36:34.715204       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 23:37:04.728500       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [375a7d2fd8f93e60f9c3abc0d93eabc7ac7f390406b748cd91b0f70fa45e969d] <==
	W0919 23:54:26.650932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:28.653878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:28.660203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:30.664476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:30.674704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:32.678764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:32.683798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:34.687900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:34.693967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:36.697344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:36.703124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:38.706246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:38.712160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:40.715766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:40.726144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:42.729685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:42.734875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:44.739012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:44.745231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:46.749165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:46.755979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:48.759791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:48.769800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:50.775713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:54:50.782517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
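(Note: the kubelet entries in the log above show why the dashboard pod never became ready: docker.io rejected the unauthenticated image pull with `toomanyrequests`. A minimal workaround sketch, assuming a host that can still pull the image — `minikube image load` is a standard subcommand, though the pod pins the image by digest, so the side-loaded image must resolve to that same digest:

	# Pull on the host (or any machine with registry credentials/quota),
	# then side-load into the profile's node, bypassing the rate-limited in-cluster pull.
	docker pull docker.io/kubernetesui/dashboard:v2.7.0
	out/minikube-linux-amd64 -p default-k8s-diff-port-304197 image load docker.io/kubernetesui/dashboard:v2.7.0
)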
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-304197 -n default-k8s-diff-port-304197
E0919 23:54:51.433463   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-304197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-7rhgt kubernetes-dashboard-855c9754f9-dscz6
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-304197 describe pod metrics-server-746fcd58dc-7rhgt kubernetes-dashboard-855c9754f9-dscz6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-304197 describe pod metrics-server-746fcd58dc-7rhgt kubernetes-dashboard-855c9754f9-dscz6: exit status 1 (60.182002ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-7rhgt" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-dscz6" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-304197 describe pod metrics-server-746fcd58dc-7rhgt kubernetes-dashboard-855c9754f9-dscz6: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.68s)
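(Note: the post-mortem's non-running-pod check can be replayed by hand with the same commands the harness uses; profile name is from this run, pod names will differ, and `describe` can return NotFound when the pods are deleted or replaced between the two calls, as happened above:

	kubectl --context default-k8s-diff-port-304197 get po -A -o=jsonpath={.items[*].metadata.name} --field-selector=status.phase!=Running
	kubectl --context default-k8s-diff-port-304197 describe pod <pod-name>
)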

                                                
                                    

Test pass (278/330)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 7.7
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 4.53
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.14
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.64
22 TestOffline 108.04
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 137.5
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 8.54
35 TestAddons/parallel/Registry 15.09
36 TestAddons/parallel/RegistryCreds 1.03
38 TestAddons/parallel/InspektorGadget 6.41
39 TestAddons/parallel/MetricsServer 6.97
41 TestAddons/parallel/CSI 45.73
42 TestAddons/parallel/Headlamp 21.52
43 TestAddons/parallel/CloudSpanner 5.7
44 TestAddons/parallel/LocalPath 53.85
45 TestAddons/parallel/NvidiaDevicePlugin 7.04
46 TestAddons/parallel/Yakd 12.54
48 TestAddons/StoppedEnableDisable 89.07
49 TestCertOptions 73.36
50 TestCertExpiration 271.21
52 TestForceSystemdFlag 63.38
53 TestForceSystemdEnv 46.03
55 TestKVMDriverInstallOrUpdate 1.15
59 TestErrorSpam/setup 40.43
60 TestErrorSpam/start 0.33
61 TestErrorSpam/status 0.8
62 TestErrorSpam/pause 1.82
63 TestErrorSpam/unpause 1.99
64 TestErrorSpam/stop 5.24
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 78.67
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 28.91
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.51
76 TestFunctional/serial/CacheCmd/cache/add_local 1.53
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.79
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 42.57
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.51
87 TestFunctional/serial/LogsFileCmd 1.55
88 TestFunctional/serial/InvalidService 4.12
90 TestFunctional/parallel/ConfigCmd 0.32
92 TestFunctional/parallel/DryRun 0.26
93 TestFunctional/parallel/InternationalLanguage 0.14
94 TestFunctional/parallel/StatusCmd 0.8
99 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/SSHCmd 0.42
103 TestFunctional/parallel/CpCmd 1.4
105 TestFunctional/parallel/FileSync 0.24
106 TestFunctional/parallel/CertSync 1.36
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
114 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.58
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
130 TestFunctional/parallel/ImageCommands/ImageBuild 2.77
131 TestFunctional/parallel/ImageCommands/Setup 0.99
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.54
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
137 TestFunctional/parallel/ProfileCmd/profile_list 0.36
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.26
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
146 TestFunctional/parallel/MountCmd/any-port 97.31
147 TestFunctional/parallel/MountCmd/specific-port 1.62
148 TestFunctional/parallel/MountCmd/VerifyCleanup 1.5
149 TestFunctional/parallel/ServiceCmd/List 1.24
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.24
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 256.45
162 TestMultiControlPlane/serial/DeployApp 5.4
163 TestMultiControlPlane/serial/PingHostFromPods 1.24
164 TestMultiControlPlane/serial/AddWorkerNode 44.55
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.94
167 TestMultiControlPlane/serial/CopyFile 13.61
168 TestMultiControlPlane/serial/StopSecondaryNode 80.8
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
170 TestMultiControlPlane/serial/RestartSecondaryNode 36.79
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.21
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 386.75
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.48
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
175 TestMultiControlPlane/serial/StopCluster 254.61
176 TestMultiControlPlane/serial/RestartCluster 110.07
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
178 TestMultiControlPlane/serial/AddSecondaryNode 103.56
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.95
183 TestJSONOutput/start/Command 85.38
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.86
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.73
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.85
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 88.93
215 TestMountStart/serial/StartWithMountFirst 25.54
216 TestMountStart/serial/VerifyMountFirst 0.37
217 TestMountStart/serial/StartWithMountSecond 21.99
218 TestMountStart/serial/VerifyMountSecond 0.37
219 TestMountStart/serial/DeleteFirst 0.74
220 TestMountStart/serial/VerifyMountPostDelete 0.37
221 TestMountStart/serial/Stop 1.39
222 TestMountStart/serial/RestartStopped 20.14
223 TestMountStart/serial/VerifyMountPostStop 0.38
226 TestMultiNode/serial/FreshStart2Nodes 134.85
227 TestMultiNode/serial/DeployApp2Nodes 4.11
228 TestMultiNode/serial/PingHostFrom2Pods 0.81
229 TestMultiNode/serial/AddNode 44.61
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.62
232 TestMultiNode/serial/CopyFile 7.43
233 TestMultiNode/serial/StopNode 2.74
234 TestMultiNode/serial/StartAfterStop 38.51
235 TestMultiNode/serial/RestartKeepsNodes 318.4
236 TestMultiNode/serial/DeleteNode 2.8
237 TestMultiNode/serial/StopMultiNode 168.22
238 TestMultiNode/serial/RestartMultiNode 89.45
239 TestMultiNode/serial/ValidateNameConflict 42.77
246 TestScheduledStopUnix 112.82
250 TestRunningBinaryUpgrade 154.48
252 TestKubernetesUpgrade 192.2
254 TestStoppedBinaryUpgrade/Setup 0.55
255 TestStoppedBinaryUpgrade/Upgrade 134.41
256 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
265 TestPause/serial/Start 101.64
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
268 TestNoKubernetes/serial/StartWithK8s 67.46
269 TestNoKubernetes/serial/StartWithStopK8s 30.2
270 TestPause/serial/SecondStartNoReconfiguration 40.49
271 TestNoKubernetes/serial/Start 26.18
279 TestNetworkPlugins/group/false 3.17
283 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
284 TestNoKubernetes/serial/ProfileList 2.52
285 TestPause/serial/Pause 0.97
286 TestPause/serial/VerifyStatus 0.28
287 TestPause/serial/Unpause 0.83
288 TestNoKubernetes/serial/Stop 1.52
289 TestPause/serial/PauseAgain 1.07
290 TestPause/serial/DeletePaused 0.9
291 TestNoKubernetes/serial/StartNoArgs 35.43
292 TestPause/serial/VerifyDeletedResources 0.37
294 TestStartStop/group/old-k8s-version/serial/FirstStart 127.86
295 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
297 TestStartStop/group/no-preload/serial/FirstStart 123.15
299 TestStartStop/group/embed-certs/serial/FirstStart 125.04
300 TestStartStop/group/old-k8s-version/serial/DeployApp 8.39
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.25
302 TestStartStop/group/old-k8s-version/serial/Stop 89.6
303 TestStartStop/group/no-preload/serial/DeployApp 8.31
305 TestStartStop/group/newest-cni/serial/FirstStart 45.84
306 TestStartStop/group/embed-certs/serial/DeployApp 8.3
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.82
308 TestStartStop/group/no-preload/serial/Stop 83.93
309 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
310 TestStartStop/group/embed-certs/serial/Stop 82.62
311 TestStartStop/group/newest-cni/serial/DeployApp 0
312 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.35
313 TestStartStop/group/newest-cni/serial/Stop 11.04
314 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
315 TestStartStop/group/newest-cni/serial/SecondStart 37.39
316 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
317 TestStartStop/group/old-k8s-version/serial/SecondStart 55.8
318 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
319 TestStartStop/group/no-preload/serial/SecondStart 64.01
320 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
321 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
323 TestStartStop/group/newest-cni/serial/Pause 4.43
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
325 TestStartStop/group/embed-certs/serial/SecondStart 65.41
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 116.29
328 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 15.06
329 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.12
330 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.35
331 TestStartStop/group/old-k8s-version/serial/Pause 3.66
332 TestNetworkPlugins/group/auto/Start 94.88
333 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 17.01
334 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
335 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
336 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
337 TestStartStop/group/embed-certs/serial/Pause 3.31
338 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
339 TestNetworkPlugins/group/kindnet/Start 66.46
340 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.33
341 TestStartStop/group/no-preload/serial/Pause 4.06
342 TestNetworkPlugins/group/calico/Start 87.21
343 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.36
344 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.33
345 TestStartStop/group/default-k8s-diff-port/serial/Stop 89.48
346 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
347 TestNetworkPlugins/group/auto/KubeletFlags 0.24
348 TestNetworkPlugins/group/auto/NetCatPod 10.3
349 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
350 TestNetworkPlugins/group/kindnet/NetCatPod 10.28
351 TestNetworkPlugins/group/auto/DNS 0.17
352 TestNetworkPlugins/group/auto/Localhost 0.15
353 TestNetworkPlugins/group/auto/HairPin 0.15
354 TestNetworkPlugins/group/kindnet/DNS 0.18
355 TestNetworkPlugins/group/kindnet/Localhost 0.15
356 TestNetworkPlugins/group/kindnet/HairPin 0.17
357 TestNetworkPlugins/group/custom-flannel/Start 75.84
358 TestNetworkPlugins/group/calico/ControllerPod 6.01
359 TestNetworkPlugins/group/enable-default-cni/Start 100.25
360 TestNetworkPlugins/group/calico/KubeletFlags 0.23
361 TestNetworkPlugins/group/calico/NetCatPod 12.29
362 TestNetworkPlugins/group/calico/DNS 0.17
363 TestNetworkPlugins/group/calico/Localhost 0.14
364 TestNetworkPlugins/group/calico/HairPin 0.15
365 TestNetworkPlugins/group/flannel/Start 81.13
366 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
367 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 69.83
368 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
369 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.28
370 TestNetworkPlugins/group/custom-flannel/DNS 0.18
371 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
372 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
373 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
374 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.36
375 TestNetworkPlugins/group/bridge/Start 87.76
377 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
378 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
382 TestNetworkPlugins/group/flannel/NetCatPod 11.3
383 TestNetworkPlugins/group/flannel/DNS 0.15
384 TestNetworkPlugins/group/flannel/Localhost 0.15
385 TestNetworkPlugins/group/flannel/HairPin 0.14
386 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
387 TestNetworkPlugins/group/bridge/NetCatPod 10.25
388 TestNetworkPlugins/group/bridge/DNS 0.15
389 TestNetworkPlugins/group/bridge/Localhost 0.13
390 TestNetworkPlugins/group/bridge/HairPin 0.12
392 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
393 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.82
TestDownloadOnly/v1.28.0/json-events (7.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-005289 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-005289 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (7.694757596s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.70s)
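The json-events subtests drive `minikube start -o=json` and watch the machine-readable progress stream. A minimal Go sketch of a consumer, assuming each stdout line is one self-contained JSON object (the `type` field below is illustrative, not confirmed by this report):

```go
// Sketch: consume a line-delimited JSON event stream such as the one
// `minikube start -o=json` emits. Field names are illustrative.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // tolerate long log lines
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			fmt.Fprintf(os.Stderr, "not JSON: %q\n", sc.Text())
			continue
		}
		fmt.Printf("event type=%v\n", ev["type"])
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

Usage would be along the lines of `out/minikube-linux-amd64 start -o=json ... | ./consumer`.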
TestDownloadOnly/v1.28.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0919 22:14:12.332266   18671 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0919 22:14:12.332374   18671 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
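preload-exists passes because the tarball cached by the previous subtest is found on disk. A minimal Go sketch of the same check, with the cache layout copied from the preload.go line above (the MINIKUBE_HOME lookup and helper name are illustrative):

```go
// Sketch: check whether a preload tarball is already cached locally,
// mirroring the path printed by preload.go above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath rebuilds the cache path seen in this report's logs; the
// "v18" prefix and file-name pattern are copied from the log line above.
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.0")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", p)
		return
	}
	fmt.Println("preload found:", p)
}
```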
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-005289
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-005289: exit status 85 (59.365144ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-005289 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-005289 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:14:04
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:14:04.677054   18683 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:14:04.677361   18683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:04.677371   18683 out.go:374] Setting ErrFile to fd 2...
	I0919 22:14:04.677375   18683 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:04.677645   18683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	W0919 22:14:04.677830   18683 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21594-14764/.minikube/config/config.json: open /home/jenkins/minikube-integration/21594-14764/.minikube/config/config.json: no such file or directory
	I0919 22:14:04.678363   18683 out.go:368] Setting JSON to true
	I0919 22:14:04.679298   18683 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3372,"bootTime":1758316673,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:14:04.679379   18683 start.go:140] virtualization: kvm guest
	I0919 22:14:04.681517   18683 out.go:99] [download-only-005289] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0919 22:14:04.681652   18683 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 22:14:04.681691   18683 notify.go:220] Checking for updates...
	I0919 22:14:04.683213   18683 out.go:171] MINIKUBE_LOCATION=21594
	I0919 22:14:04.684719   18683 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:14:04.686117   18683 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 22:14:04.687425   18683 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 22:14:04.688577   18683 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0919 22:14:04.690788   18683 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 22:14:04.691019   18683 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:14:05.199292   18683 out.go:99] Using the kvm2 driver based on user configuration
	I0919 22:14:05.199322   18683 start.go:304] selected driver: kvm2
	I0919 22:14:05.199328   18683 start.go:918] validating driver "kvm2" against <nil>
	I0919 22:14:05.199619   18683 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:14:05.199766   18683 install.go:123] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21594-14764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 22:14:05.215393   18683 install.go:134] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is {Version:v1.37.0 Commit:6e37ee63f758843bb5fe33c3a528c564c4b83d53}
	I0919 22:14:05.215422   18683 install.go:123] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21594-14764/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0919 22:14:05.229427   18683 install.go:134] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is {Version:v1.37.0 Commit:6e37ee63f758843bb5fe33c3a528c564c4b83d53}
	I0919 22:14:05.229476   18683 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:14:05.230028   18683 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I0919 22:14:05.230199   18683 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 22:14:05.230232   18683 cni.go:84] Creating CNI manager for ""
	I0919 22:14:05.230290   18683 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0919 22:14:05.230303   18683 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0919 22:14:05.230370   18683 start.go:348] cluster config:
	{Name:download-only-005289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-005289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:14:05.230581   18683 iso.go:125] acquiring lock: {Name:mk21ede999fca7478b081d3e470ef3cc88b140f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:14:05.232473   18683 out.go:99] Downloading VM boot image ...
	I0919 22:14:05.232509   18683 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21594-14764/.minikube/cache/iso/amd64/minikube-v1.37.0-amd64.iso
	I0919 22:14:07.490896   18683 out.go:99] Starting "download-only-005289" primary control-plane node in "download-only-005289" cluster
	I0919 22:14:07.490925   18683 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0919 22:14:07.519147   18683 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:14:07.519176   18683 cache.go:58] Caching tarball of preloaded images
	I0919 22:14:07.519341   18683 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0919 22:14:07.520907   18683 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0919 22:14:07.520925   18683 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0919 22:14:07.552333   18683 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-005289 host does not exist
	  To start a cluster, run: "minikube start -p download-only-005289"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
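`minikube logs` against a download-only profile exits with status 85, and the subtest still passes because that non-zero exit is the expected outcome. A minimal Go sketch of asserting on a specific exit code with os/exec (the profile name is taken from the log above):

```go
// Sketch: run a command and read its exit code, the way the
// LogsDuration subtest tolerates `exit status 85` above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func exitCode(cmd *exec.Cmd) (int, error) {
	err := cmd.Run()
	if err == nil {
		return 0, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil // process ran but exited non-zero
	}
	return -1, err // could not even start the process
}

func main() {
	code, err := exitCode(exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-005289"))
	if err != nil {
		fmt.Println("run error:", err)
		return
	}
	fmt.Println("exit code:", code) // the test above expects 85 here
}
```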
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-005289
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.0/json-events (4.53s)
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-098176 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-098176 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (4.534312335s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (4.53s)

TestDownloadOnly/v1.34.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0919 22:14:17.211801   18671 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0919 22:14:17.211848   18671 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14764/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.06s)
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-098176
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-098176: exit status 85 (57.282371ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-005289 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-005289 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:14 UTC │
	│ delete  │ -p download-only-005289                                                                                                                                                                             │ download-only-005289 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:14 UTC │
	│ start   │ -o=json --download-only -p download-only-098176 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-098176 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:14:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:14:12.719674   18890 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:14:12.719805   18890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:12.719818   18890 out.go:374] Setting ErrFile to fd 2...
	I0919 22:14:12.719824   18890 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:12.720015   18890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	I0919 22:14:12.720475   18890 out.go:368] Setting JSON to true
	I0919 22:14:12.721369   18890 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3380,"bootTime":1758316673,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:14:12.721465   18890 start.go:140] virtualization: kvm guest
	I0919 22:14:12.723565   18890 out.go:99] [download-only-098176] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:14:12.723758   18890 notify.go:220] Checking for updates...
	I0919 22:14:12.725103   18890 out.go:171] MINIKUBE_LOCATION=21594
	I0919 22:14:12.726376   18890 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:14:12.727722   18890 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 22:14:12.728913   18890 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 22:14:12.732227   18890 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-098176 host does not exist
	  To start a cluster, run: "minikube start -p download-only-098176"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

TestDownloadOnly/v1.34.0/DeleteAll (0.14s)
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.14s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.12s)
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-098176
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.64s)
=== RUN   TestBinaryMirror
I0919 22:14:17.781187   18671 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-536834 --alsologtostderr --binary-mirror http://127.0.0.1:39355 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-536834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-536834
--- PASS: TestBinaryMirror (0.64s)
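The download steps in this report fetch artifacts with a `?checksum=file:<url>.sha256` query, delegating integrity checking to the download layer. A minimal Go sketch of the underlying check, verifying a file against a SHA-256 digest (the path and digest below are placeholders):

```go
// Sketch: verify a downloaded file against a SHA-256 checksum, the
// property the `?checksum=file:...` query above asks the downloader
// to enforce.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

func verifySHA256(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if !strings.EqualFold(got, wantHex) {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Both arguments are placeholders; a real caller reads the expected
	// digest from the sibling .sha256 file.
	if err := verifySHA256("kubectl", "<expected-hex-digest>"); err != nil {
		fmt.Println(err)
	}
}
```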
TestOffline (108.04s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-914965 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-914965 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m47.164261579s)
helpers_test.go:175: Cleaning up "offline-crio-914965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-914965
--- PASS: TestOffline (108.04s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-266998
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-266998: exit status 85 (49.787275ms)

-- stdout --
	* Profile "addons-266998" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-266998"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-266998
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-266998: exit status 85 (48.880071ms)

-- stdout --
	* Profile "addons-266998" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-266998"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (137.5s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-266998 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-266998 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m17.494911458s)
--- PASS: TestAddons/Setup (137.50s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-266998 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-266998 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (8.54s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-266998 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-266998 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ad155804-0aa9-4ac7-b063-6258e2f3e249] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ad155804-0aa9-4ac7-b063-6258e2f3e249] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.005012369s
addons_test.go:694: (dbg) Run:  kubectl --context addons-266998 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-266998 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-266998 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.54s)

TestAddons/parallel/Registry (15.09s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.048823ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-mqfsk" [53922a4c-8b51-430a-a161-b52ae6012395] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003460031s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-fbm84" [0543b709-064a-421e-8de0-6bcf044aa6d9] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005117536s
addons_test.go:392: (dbg) Run:  kubectl --context addons-266998 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-266998 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-266998 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.439566299s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 ip
2025/09/19 22:17:07 [DEBUG] GET http://192.168.39.205:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266998 addons disable registry --alsologtostderr -v=1: (1.478347436s)
--- PASS: TestAddons/parallel/Registry (15.09s)
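The registry test probes the service with `wget --spider -S`, i.e. a headers-only request that succeeds on any 2xx answer. A minimal Go sketch of the same probe, assuming it runs inside the cluster where the service DNS name resolves:

```go
// Sketch: the in-cluster equivalent of the `wget --spider -S` probe
// above -- a HEAD request that only inspects the response status.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry status:", resp.Status)
}
```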
TestAddons/parallel/RegistryCreds (1.03s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 7.607718ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-266998
addons_test.go:332: (dbg) Run:  kubectl --context addons-266998 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (1.03s)

TestAddons/parallel/InspektorGadget (6.41s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-wfgd6" [427fd9ea-6757-43dc-bfb1-34e3b4dc7417] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.007779944s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.41s)
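The readiness checks above poll pod listings until the labelled pods report Running. The same wait can be expressed with `kubectl wait`; a minimal Go sketch shelling out to it, with the context, namespace, and selector taken from the log above:

```go
// Sketch: wait for pods matching a label to become Ready, the check
// the gadget test performs above, expressed via `kubectl wait`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "addons-266998",
		"wait", "--namespace", "gadget",
		"--for=condition=ready", "pod",
		"--selector=k8s-app=gadget", "--timeout=8m")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("wait failed:", err)
	}
}
```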
TestAddons/parallel/MetricsServer (6.97s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.796032ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-w2zlg" [5e422a9c-fc42-44b6-b9a0-5b52446522be] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.011215129s
addons_test.go:463: (dbg) Run:  kubectl --context addons-266998 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.97s)

TestAddons/parallel/CSI (45.73s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0919 22:17:00.474580   18671 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0919 22:17:00.499854   18671 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0919 22:17:00.499886   18671 kapi.go:107] duration metric: took 25.32393ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 25.33579ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-266998 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-266998 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [888de6fd-a80c-447d-8295-1c88ee8119fd] Pending
helpers_test.go:352: "task-pv-pod" [888de6fd-a80c-447d-8295-1c88ee8119fd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [888de6fd-a80c-447d-8295-1c88ee8119fd] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004641898s
addons_test.go:572: (dbg) Run:  kubectl --context addons-266998 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-266998 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-266998 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-266998 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-266998 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-266998 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-266998 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [8b0d9bf0-ca76-4847-bf93-eb6ed700fb85] Pending
helpers_test.go:352: "task-pv-pod-restore" [8b0d9bf0-ca76-4847-bf93-eb6ed700fb85] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [8b0d9bf0-ca76-4847-bf93-eb6ed700fb85] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004291404s
addons_test.go:614: (dbg) Run:  kubectl --context addons-266998 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-266998 delete pod task-pv-pod-restore: (1.440763871s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-266998 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-266998 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266998 addons disable volumesnapshots --alsologtostderr -v=1: (1.103567242s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266998 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.057592743s)
--- PASS: TestAddons/parallel/CSI (45.73s)
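helpers_test.go:402 polls the PVC phase by re-running a jsonpath query until it reads Bound, which is why the same `kubectl get pvc` line repeats above. A minimal Go sketch of that loop (poll interval and timeout are illustrative):

```go
// Sketch: poll a PVC's phase via jsonpath until it is Bound, the loop
// helpers_test.go:402 runs repeatedly above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitPVCBound(ctx, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
			"-n", ns, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
}

func main() {
	if err := waitPVCBound("addons-266998", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```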
TestAddons/parallel/Headlamp (21.52s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-266998 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-8f6cl" [ee9c7869-f52f-40d7-a530-e9ebbc865637] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-8f6cl" [ee9c7869-f52f-40d7-a530-e9ebbc865637] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-8f6cl" [ee9c7869-f52f-40d7-a530-e9ebbc865637] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.009164456s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266998 addons disable headlamp --alsologtostderr -v=1: (6.55970945s)
--- PASS: TestAddons/parallel/Headlamp (21.52s)

TestAddons/parallel/CloudSpanner (5.7s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-nh8gm" [beb404b1-8052-4aa7-b9a9-e5dc7889562e] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.014055204s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.70s)

TestAddons/parallel/LocalPath (53.85s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-266998 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-266998 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-266998 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [256c9731-bb99-43c3-ab64-baa97778b26a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [256c9731-bb99-43c3-ab64-baa97778b26a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [256c9731-bb99-43c3-ab64-baa97778b26a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004582173s
addons_test.go:967: (dbg) Run:  kubectl --context addons-266998 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 ssh "cat /opt/local-path-provisioner/pvc-9797f505-00b1-448b-b622-5acde1f9687f_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-266998 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-266998 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266998 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.996012525s)
--- PASS: TestAddons/parallel/LocalPath (53.85s)
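addons_test.go:976 verifies the local-path provisioner by reading the provisioned file back through `minikube ssh`. A minimal Go sketch of that readback; the PVC path is copied from the log above, and the report does not show the file's expected content:

```go
// Sketch: read the provisioned file back through `minikube ssh`, as
// addons_test.go:976 does above. The expected payload is not shown in
// this report, so the sketch only prints what it finds.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	path := "/opt/local-path-provisioner/pvc-9797f505-00b1-448b-b622-5acde1f9687f_default_test-pvc/file1"
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "addons-266998",
		"ssh", "cat "+path).Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	fmt.Println("file content:", strings.TrimSpace(string(out)))
}
```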
TestAddons/parallel/NvidiaDevicePlugin (7.04s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-g7n82" [fa453c43-6ba3-4d31-87ea-6a4bd054a758] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008631331s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266998 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.029123381s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.04s)

TestAddons/parallel/Yakd (12.54s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-kg2cn" [efce7a76-b6d3-416c-9636-8a3c1aff3de8] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.011486086s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-266998 addons disable yakd --alsologtostderr -v=1: (6.527043009s)
--- PASS: TestAddons/parallel/Yakd (12.54s)

TestAddons/StoppedEnableDisable (89.07s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-266998
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-266998: (1m28.804657298s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-266998
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-266998
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-266998
--- PASS: TestAddons/StoppedEnableDisable (89.07s)

TestCertOptions (73.36s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-746143 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0919 23:26:36.656820   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-746143 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m11.824860875s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-746143 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-746143 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-746143 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-746143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-746143
--- PASS: TestCertOptions (73.36s)

TestCertExpiration (271.21s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-265541 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-265541 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m9.542310154s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-265541 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-265541 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.750880961s)
helpers_test.go:175: Cleaning up "cert-expiration-265541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-265541
--- PASS: TestCertExpiration (271.21s)
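TestCertExpiration first starts the cluster with --cert-expiration=3m, then restarts it with 8760h (one year) to force certificates to be reissued. A minimal Go sketch of inspecting a certificate's NotAfter, the property those flags control (the file path is illustrative):

```go
// Sketch: inspect a cluster certificate's NotAfter, the property the
// --cert-expiration flags above manipulate (3m, then 8760h = 1 year).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // path illustrative
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("expires %s (in %s)\n", cert.NotAfter, time.Until(cert.NotAfter).Round(time.Minute))
}
```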
TestForceSystemdFlag (63.38s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-682543 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-682543 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m2.285695113s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-682543 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-682543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-682543
--- PASS: TestForceSystemdFlag (63.38s)
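docker_test.go:132 inspects /etc/crio/crio.conf.d/02-crio.conf after starting with --force-systemd. A minimal Go sketch of the kind of assertion involved; the exact `cgroup_manager = "systemd"` key is an assumption, not confirmed by this report:

```go
// Sketch: the assertion behind `cat /etc/crio/crio.conf.d/02-crio.conf`
// above -- with --force-systemd, CRI-O should be configured with the
// systemd cgroup manager. The config key here is an assumption.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not configured")
	}
}
```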
TestForceSystemdEnv (46.03s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-470516 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-470516 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (45.164906012s)
helpers_test.go:175: Cleaning up "force-systemd-env-470516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-470516
--- PASS: TestForceSystemdEnv (46.03s)

TestKVMDriverInstallOrUpdate (1.15s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0919 23:27:21.120176   18671 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0919 23:27:21.120405   18671 install.go:123] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1086682206/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0919 23:27:21.165156   18671 install.go:134] /tmp/TestKVMDriverInstallOrUpdate1086682206/001/docker-machine-driver-kvm2 version is {Version:v1.1.1 Commit:40a1a986a50eac533e396012e35516d3d6c63f36-dirty}
W0919 23:27:21.165225   18671 install.go:61] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0 or later
W0919 23:27:21.165390   18671 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0919 23:27:21.165480   18671 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1086682206/001/docker-machine-driver-kvm2
I0919 23:27:22.112382   18671 install.go:123] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1086682206/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0919 23:27:22.136638   18671 install.go:134] /tmp/TestKVMDriverInstallOrUpdate1086682206/001/docker-machine-driver-kvm2 version is {Version:v1.37.0 Commit:1af8bdc072232de4b1fec3b6cc0e8337e118bc83}
--- PASS: TestKVMDriverInstallOrUpdate (1.15s)
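A note for anyone replaying this step by hand: the Downloading line above carries a "?checksum=file:<url>.sha256" query, the go-getter convention for verifying a payload against its published SHA256, which is why the second Validating pass can immediately read the v1.37.0 version from the replaced binary. A minimal, hypothetical Go sketch of that fetch pattern (the destination path is illustrative, not the test's tmp dir, and this is not minikube's actual download.go):

	// Hypothetical sketch: checksum-verified fetch in the style of the
	// "?checksum=file:..." URL shown in the log above.
	package main

	import (
		"log"

		getter "github.com/hashicorp/go-getter"
	)

	func main() {
		release := "https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64"
		// go-getter reads the .sha256 named in the checksum query and
		// rejects the download if the payload does not match it.
		src := release + "?checksum=file:" + release + ".sha256"
		if err := getter.GetFile("/tmp/docker-machine-driver-kvm2", src); err != nil {
			log.Fatalf("driver download failed: %v", err)
		}
	}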

TestErrorSpam/setup (40.43s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-513732 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-513732 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0919 22:21:36.665587   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:36.672026   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:36.683458   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:36.704813   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:36.746250   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:36.827742   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:36.989332   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:37.311045   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:37.953098   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:39.234719   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:41.796853   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:46.918605   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:57.160696   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-513732 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-513732 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.431542942s)
--- PASS: TestErrorSpam/setup (40.43s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.8s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 status
--- PASS: TestErrorSpam/status (0.80s)

TestErrorSpam/pause (1.82s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 pause
--- PASS: TestErrorSpam/pause (1.82s)

TestErrorSpam/unpause (1.99s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 unpause
--- PASS: TestErrorSpam/unpause (1.99s)

TestErrorSpam/stop (5.24s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 stop: (2.060938415s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 stop: (1.263822061s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-513732 --log_dir /tmp/nospam-513732 stop: (1.916382022s)
--- PASS: TestErrorSpam/stop (5.24s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21594-14764/.minikube/files/etc/test/nested/copy/18671/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (78.67s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-351278 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0919 22:22:17.642131   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:58.604860   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-351278 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m18.674543024s)
--- PASS: TestFunctional/serial/StartWithProxy (78.67s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.91s)

=== RUN   TestFunctional/serial/SoftStart
I0919 22:23:31.408042   18671 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-351278 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-351278 --alsologtostderr -v=8: (28.906765133s)
functional_test.go:678: soft start took 28.907539316s for "functional-351278" cluster.
I0919 22:24:00.315170   18671 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (28.91s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-351278 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-351278 cache add registry.k8s.io/pause:3.1: (1.159171145s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-351278 cache add registry.k8s.io/pause:3.3: (1.163121131s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-351278 cache add registry.k8s.io/pause:latest: (1.19194728s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

TestFunctional/serial/CacheCmd/cache/add_local (1.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-351278 /tmp/TestFunctionalserialCacheCmdcacheadd_local1027662244/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 cache add minikube-local-cache-test:functional-351278
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-351278 cache add minikube-local-cache-test:functional-351278: (1.173953307s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 cache delete minikube-local-cache-test:functional-351278
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-351278
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.53s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351278 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (215.667209ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-351278 cache reload: (1.096490921s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 kubectl -- --context functional-351278 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-351278 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (42.57s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-351278 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0919 22:24:20.529993   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-351278 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.570607855s)
functional_test.go:776: restart took 42.570748753s for "functional-351278" cluster.
I0919 22:24:50.486078   18671 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (42.57s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-351278 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.51s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-351278 logs: (1.509114463s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

TestFunctional/serial/LogsFileCmd (1.55s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 logs --file /tmp/TestFunctionalserialLogsFileCmd2997501248/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-351278 logs --file /tmp/TestFunctionalserialLogsFileCmd2997501248/001/logs.txt: (1.548535121s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)

TestFunctional/serial/InvalidService (4.12s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-351278 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-351278
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-351278: exit status 115 (282.144846ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.95:32172 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-351278 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.12s)

TestFunctional/parallel/ConfigCmd (0.32s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351278 config get cpus: exit status 14 (54.307227ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351278 config get cpus: exit status 14 (47.55802ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)

TestFunctional/parallel/DryRun (0.26s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-351278 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-351278 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (135.975951ms)

-- stdout --
	* [functional-351278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0919 22:32:50.311626   28962 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:32:50.311774   28962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:32:50.311784   28962 out.go:374] Setting ErrFile to fd 2...
	I0919 22:32:50.311789   28962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:32:50.311994   28962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	I0919 22:32:50.312472   28962 out.go:368] Setting JSON to false
	I0919 22:32:50.313384   28962 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4497,"bootTime":1758316673,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:32:50.313468   28962 start.go:140] virtualization: kvm guest
	I0919 22:32:50.315688   28962 out.go:179] * [functional-351278] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:32:50.316940   28962 notify.go:220] Checking for updates...
	I0919 22:32:50.316983   28962 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:32:50.318564   28962 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:32:50.319870   28962 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 22:32:50.321180   28962 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 22:32:50.322398   28962 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:32:50.323365   28962 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:32:50.324957   28962 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:32:50.325423   28962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:32:50.325510   28962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:32:50.339986   28962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39745
	I0919 22:32:50.340427   28962 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:32:50.340919   28962 main.go:141] libmachine: Using API Version  1
	I0919 22:32:50.340945   28962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:32:50.341315   28962 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:32:50.341504   28962 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:32:50.341788   28962 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:32:50.342122   28962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:32:50.342165   28962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:32:50.357589   28962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43219
	I0919 22:32:50.358098   28962 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:32:50.358591   28962 main.go:141] libmachine: Using API Version  1
	I0919 22:32:50.358620   28962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:32:50.358966   28962 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:32:50.359157   28962 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:32:50.397119   28962 out.go:179] * Using the kvm2 driver based on existing profile
	I0919 22:32:50.398435   28962 start.go:304] selected driver: kvm2
	I0919 22:32:50.398448   28962 start.go:918] validating driver "kvm2" against &{Name:functional-351278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-351278 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:32:50.398554   28962 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:32:50.400763   28962 out.go:203] 
	W0919 22:32:50.402082   28962 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 22:32:50.403394   28962 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-351278 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.26s)
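The rejected dry run above is the requested-memory floor check: 250MB falls under the 1800MB usable minimum, minikube reports RSRC_INSUFFICIENT_REQ_MEMORY, and the process exits 23, which is exactly what the test asserts. A hypothetical sketch of a check with that shape (the constant and function names are illustrative, not minikube's internals):

	// Hypothetical sketch: a memory floor validation mirroring the message
	// and exit status seen in the DryRun log above.
	package main

	import (
		"fmt"
		"os"
	)

	const minUsableMemoryMB = 1800 // floor quoted in the error message

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		if err := validateMemory(250); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
			os.Exit(23) // the exit status the test expects
		}
	}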

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-351278 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-351278 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (136.603742ms)

-- stdout --
	* [functional-351278] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0919 22:31:08.955961   27920 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:31:08.956050   27920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:31:08.956054   27920 out.go:374] Setting ErrFile to fd 2...
	I0919 22:31:08.956058   27920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:31:08.956374   27920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	I0919 22:31:08.956839   27920 out.go:368] Setting JSON to false
	I0919 22:31:08.957679   27920 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4396,"bootTime":1758316673,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:31:08.957782   27920 start.go:140] virtualization: kvm guest
	I0919 22:31:08.959988   27920 out.go:179] * [functional-351278] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0919 22:31:08.961550   27920 notify.go:220] Checking for updates...
	I0919 22:31:08.961596   27920 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:31:08.962809   27920 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:31:08.964118   27920 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 22:31:08.965424   27920 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 22:31:08.966664   27920 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:31:08.967960   27920 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:31:08.969766   27920 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:31:08.970140   27920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:31:08.970201   27920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:31:08.984228   27920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36653
	I0919 22:31:08.984865   27920 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:31:08.985412   27920 main.go:141] libmachine: Using API Version  1
	I0919 22:31:08.985437   27920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:31:08.985812   27920 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:31:08.986117   27920 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:31:08.986437   27920 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:31:08.986941   27920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:31:08.986987   27920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:31:09.000934   27920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43579
	I0919 22:31:09.001426   27920 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:31:09.001964   27920 main.go:141] libmachine: Using API Version  1
	I0919 22:31:09.001986   27920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:31:09.002332   27920 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:31:09.002575   27920 main.go:141] libmachine: (functional-351278) Calling .DriverName
	I0919 22:31:09.034390   27920 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I0919 22:31:09.035549   27920 start.go:304] selected driver: kvm2
	I0919 22:31:09.035566   27920 start.go:918] validating driver "kvm2" against &{Name:functional-351278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.37.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-351278 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:31:09.035656   27920 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:31:09.037710   27920 out.go:203] 
	W0919 22:31:09.038948   27920 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0919 22:31:09.040312   27920 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.8s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.80s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/SSHCmd (0.42s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

TestFunctional/parallel/CpCmd (1.4s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh -n functional-351278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 cp functional-351278:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3792593431/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh -n functional-351278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh -n functional-351278 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.40s)

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/18671/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "sudo cat /etc/test/nested/copy/18671/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.36s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/18671.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "sudo cat /etc/ssl/certs/18671.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/18671.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "sudo cat /usr/share/ca-certificates/18671.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/186712.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "sudo cat /etc/ssl/certs/186712.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/186712.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "sudo cat /usr/share/ca-certificates/186712.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.36s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-351278 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351278 ssh "sudo systemctl is-active docker": exit status 1 (226.352643ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351278 ssh "sudo systemctl is-active containerd": exit status 1 (224.547729ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
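The non-zero exits above are the expected outcome, not failures: `systemctl is-active` prints the unit state and exits 3 for an inactive unit, so the test has to treat "inactive" on stdout as success rather than trusting the exit code. A hypothetical probe in the same spirit (the helper name is illustrative):

	// Hypothetical sketch: read the state even when `systemctl is-active`
	// exits non-zero; status 3 with "inactive" on stdout is a valid answer.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func unitState(unit string) (string, error) {
		out, err := exec.Command("systemctl", "is-active", unit).Output()
		if state := strings.TrimSpace(string(out)); state != "" {
			return state, nil // e.g. "active" or "inactive"
		}
		return "", err
	}

	func main() {
		for _, unit := range []string{"docker", "containerd"} {
			state, err := unitState(unit)
			fmt.Printf("%s: %s (err=%v)\n", unit, state, err)
		}
	}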

TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.58s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-351278 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-351278
localhost/kicbase/echo-server:functional-351278
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-351278 image ls --format short --alsologtostderr:
I0919 22:35:02.089885   29882 out.go:360] Setting OutFile to fd 1 ...
I0919 22:35:02.090168   29882 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:02.090178   29882 out.go:374] Setting ErrFile to fd 2...
I0919 22:35:02.090183   29882 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:02.090386   29882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
I0919 22:35:02.090985   29882 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:02.091107   29882 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:02.091669   29882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 22:35:02.091720   29882 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 22:35:02.109715   29882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45683
I0919 22:35:02.110327   29882 main.go:141] libmachine: () Calling .GetVersion
I0919 22:35:02.110998   29882 main.go:141] libmachine: Using API Version  1
I0919 22:35:02.111021   29882 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 22:35:02.111441   29882 main.go:141] libmachine: () Calling .GetMachineName
I0919 22:35:02.111701   29882 main.go:141] libmachine: (functional-351278) Calling .GetState
I0919 22:35:02.113882   29882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 22:35:02.113946   29882 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 22:35:02.128835   29882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36977
I0919 22:35:02.129285   29882 main.go:141] libmachine: () Calling .GetVersion
I0919 22:35:02.129940   29882 main.go:141] libmachine: Using API Version  1
I0919 22:35:02.129988   29882 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 22:35:02.130368   29882 main.go:141] libmachine: () Calling .GetMachineName
I0919 22:35:02.130599   29882 main.go:141] libmachine: (functional-351278) Calling .DriverName
I0919 22:35:02.130823   29882 ssh_runner.go:195] Run: systemctl --version
I0919 22:35:02.130856   29882 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
I0919 22:35:02.134613   29882 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
I0919 22:35:02.135123   29882 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
I0919 22:35:02.135158   29882 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
I0919 22:35:02.135383   29882 main.go:141] libmachine: (functional-351278) Calling .GetSSHPort
I0919 22:35:02.135593   29882 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
I0919 22:35:02.135757   29882 main.go:141] libmachine: (functional-351278) Calling .GetSSHUsername
I0919 22:35:02.135906   29882 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/functional-351278/id_rsa Username:docker}
I0919 22:35:02.218697   29882 ssh_runner.go:195] Run: sudo crictl images --output json
I0919 22:35:02.262440   29882 main.go:141] libmachine: Making call to close driver server
I0919 22:35:02.262453   29882 main.go:141] libmachine: (functional-351278) Calling .Close
I0919 22:35:02.262766   29882 main.go:141] libmachine: Successfully made call to close driver server
I0919 22:35:02.262785   29882 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 22:35:02.262835   29882 main.go:141] libmachine: Making call to close driver server
I0919 22:35:02.262848   29882 main.go:141] libmachine: (functional-351278) Calling .Close
I0919 22:35:02.263078   29882 main.go:141] libmachine: Successfully made call to close driver server
I0919 22:35:02.263095   29882 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 22:35:02.263108   29882 main.go:141] libmachine: (functional-351278) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
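
The stderr trace above shows how `image ls` is implemented on this driver: the CLI launches the kvm2 machine plugin, opens an SSH session to the VM, and shells out to `sudo crictl images --output json`, then renders the parsed result in the requested format. A rough hand-rolled equivalent of the short format (the jq pipeline is illustrative, not part of the test):

    # Approximate `image ls --format short` by querying CRI-O directly.
    out/minikube-linux-amd64 -p functional-351278 ssh "sudo crictl images --output json" \
      | jq -r '.images[].repoTags[]' | sort -r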

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-351278 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-351278  │ 978d27c93a0e7 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ localhost/kicbase/echo-server           │ functional-351278  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-351278 image ls --format table --alsologtostderr:
I0919 22:35:04.660129   30081 out.go:360] Setting OutFile to fd 1 ...
I0919 22:35:04.660503   30081 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:04.660525   30081 out.go:374] Setting ErrFile to fd 2...
I0919 22:35:04.660534   30081 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:04.660853   30081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
I0919 22:35:04.661448   30081 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:04.661625   30081 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:04.662031   30081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 22:35:04.662079   30081 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 22:35:04.675856   30081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46111
I0919 22:35:04.676330   30081 main.go:141] libmachine: () Calling .GetVersion
I0919 22:35:04.676865   30081 main.go:141] libmachine: Using API Version  1
I0919 22:35:04.676884   30081 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 22:35:04.677303   30081 main.go:141] libmachine: () Calling .GetMachineName
I0919 22:35:04.677516   30081 main.go:141] libmachine: (functional-351278) Calling .GetState
I0919 22:35:04.679778   30081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 22:35:04.679833   30081 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 22:35:04.693158   30081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35049
I0919 22:35:04.693685   30081 main.go:141] libmachine: () Calling .GetVersion
I0919 22:35:04.694221   30081 main.go:141] libmachine: Using API Version  1
I0919 22:35:04.694247   30081 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 22:35:04.694606   30081 main.go:141] libmachine: () Calling .GetMachineName
I0919 22:35:04.694840   30081 main.go:141] libmachine: (functional-351278) Calling .DriverName
I0919 22:35:04.695030   30081 ssh_runner.go:195] Run: systemctl --version
I0919 22:35:04.695055   30081 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
I0919 22:35:04.698217   30081 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
I0919 22:35:04.698713   30081 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
I0919 22:35:04.698762   30081 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
I0919 22:35:04.698990   30081 main.go:141] libmachine: (functional-351278) Calling .GetSSHPort
I0919 22:35:04.699203   30081 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
I0919 22:35:04.699363   30081 main.go:141] libmachine: (functional-351278) Calling .GetSSHUsername
I0919 22:35:04.699551   30081 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/functional-351278/id_rsa Username:docker}
I0919 22:35:04.801188   30081 ssh_runner.go:195] Run: sudo crictl images --output json
I0919 22:35:04.863466   30081 main.go:141] libmachine: Making call to close driver server
I0919 22:35:04.863490   30081 main.go:141] libmachine: (functional-351278) Calling .Close
I0919 22:35:04.863894   30081 main.go:141] libmachine: Successfully made call to close driver server
I0919 22:35:04.863917   30081 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 22:35:04.863938   30081 main.go:141] libmachine: Making call to close driver server
I0919 22:35:04.863951   30081 main.go:141] libmachine: (functional-351278) Calling .Close
I0919 22:35:04.863915   30081 main.go:141] libmachine: (functional-351278) DBG | Closing plugin on server side
I0919 22:35:04.864202   30081 main.go:141] libmachine: Successfully made call to close driver server
I0919 22:35:04.864217   30081 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-351278 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-351278"],"size":"4943877"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registr
y.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{
"id":"9d2f82b0961134d95f174e7d845f7418e8cb9fefd4b3c474c0bb9b980e9ef7c2","repoDigests":["docker.io/library/c905610ecb9af4a43341ca4a2cc891afda2460958864d7a91cd93a43c02277f2-tmp@sha256:3fe8a6749daeb3d71ea89c0d0a6b3abd7235169c1177d9fb35f8327a8c6c1d03"],"repoTags":[],"size":"1466017"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"978d27c93a0e7539e2e8713f23b45f3ecdb195882e83191eb476f130a17585e2","repoDigests":["localhost/minikube-local-cache-test@sha256:5f714e759c5b9836f666b59676d15960881c2c525f99c6ff2155610b8fa99eae"],"repoTags":["localhost/minikube-local-cache-test:functional-351278"],"size":"3328"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"8
9050097"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e8
03f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-351278 image ls --format json --alsologtostderr:
I0919 22:35:04.418960   30057 out.go:360] Setting OutFile to fd 1 ...
I0919 22:35:04.419212   30057 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:04.419222   30057 out.go:374] Setting ErrFile to fd 2...
I0919 22:35:04.419226   30057 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:04.419508   30057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
I0919 22:35:04.420161   30057 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:04.420267   30057 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:04.420659   30057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 22:35:04.420720   30057 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 22:35:04.434645   30057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38387
I0919 22:35:04.435190   30057 main.go:141] libmachine: () Calling .GetVersion
I0919 22:35:04.435715   30057 main.go:141] libmachine: Using API Version  1
I0919 22:35:04.435756   30057 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 22:35:04.436143   30057 main.go:141] libmachine: () Calling .GetMachineName
I0919 22:35:04.436375   30057 main.go:141] libmachine: (functional-351278) Calling .GetState
I0919 22:35:04.438529   30057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 22:35:04.438567   30057 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 22:35:04.453169   30057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43143
I0919 22:35:04.453664   30057 main.go:141] libmachine: () Calling .GetVersion
I0919 22:35:04.454167   30057 main.go:141] libmachine: Using API Version  1
I0919 22:35:04.454206   30057 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 22:35:04.454533   30057 main.go:141] libmachine: () Calling .GetMachineName
I0919 22:35:04.454714   30057 main.go:141] libmachine: (functional-351278) Calling .DriverName
I0919 22:35:04.454928   30057 ssh_runner.go:195] Run: systemctl --version
I0919 22:35:04.454953   30057 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
I0919 22:35:04.458366   30057 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
I0919 22:35:04.458852   30057 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
I0919 22:35:04.458876   30057 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
I0919 22:35:04.459087   30057 main.go:141] libmachine: (functional-351278) Calling .GetSSHPort
I0919 22:35:04.459250   30057 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
I0919 22:35:04.459390   30057 main.go:141] libmachine: (functional-351278) Calling .GetSSHUsername
I0919 22:35:04.459531   30057 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/functional-351278/id_rsa Username:docker}
I0919 22:35:04.544494   30057 ssh_runner.go:195] Run: sudo crictl images --output json
I0919 22:35:04.601131   30057 main.go:141] libmachine: Making call to close driver server
I0919 22:35:04.601150   30057 main.go:141] libmachine: (functional-351278) Calling .Close
I0919 22:35:04.601480   30057 main.go:141] libmachine: Successfully made call to close driver server
I0919 22:35:04.601497   30057 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 22:35:04.601504   30057 main.go:141] libmachine: Making call to close driver server
I0919 22:35:04.601511   30057 main.go:141] libmachine: (functional-351278) Calling .Close
I0919 22:35:04.601518   30057 main.go:141] libmachine: (functional-351278) DBG | Closing plugin on server side
I0919 22:35:04.601829   30057 main.go:141] libmachine: (functional-351278) DBG | Closing plugin on server side
I0919 22:35:04.601884   30057 main.go:141] libmachine: Successfully made call to close driver server
I0919 22:35:04.601921   30057 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
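
Of the four output formats, JSON is the easiest to post-process: it is a single array of objects with `id`, `repoDigests`, `repoTags`, and `size` (bytes, as a string). A hedged example of reducing it to a tag/size table (the jq pipeline is illustrative, not part of the test):

    # Print "tag<TAB>size-in-MB", skipping untagged entries such as the
    # intermediate -tmp build layer visible in the array above.
    out/minikube-linux-amd64 -p functional-351278 image ls --format json \
      | jq -r '.[] | select(.repoTags | length > 0)
               | "\(.repoTags[0])\t\((.size | tonumber) / 1000000 | floor)MB"'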

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-351278 image ls --format yaml --alsologtostderr:
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-351278
size: "4943877"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 978d27c93a0e7539e2e8713f23b45f3ecdb195882e83191eb476f130a17585e2
repoDigests:
- localhost/minikube-local-cache-test@sha256:5f714e759c5b9836f666b59676d15960881c2c525f99c6ff2155610b8fa99eae
repoTags:
- localhost/minikube-local-cache-test:functional-351278
size: "3328"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-351278 image ls --format yaml --alsologtostderr:
I0919 22:35:02.315532   29930 out.go:360] Setting OutFile to fd 1 ...
I0919 22:35:02.315629   29930 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:02.315633   29930 out.go:374] Setting ErrFile to fd 2...
I0919 22:35:02.315637   29930 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:02.315884   29930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
I0919 22:35:02.316460   29930 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:02.316547   29930 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:02.316960   29930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 22:35:02.317018   29930 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 22:35:02.331343   29930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38063
I0919 22:35:02.331910   29930 main.go:141] libmachine: () Calling .GetVersion
I0919 22:35:02.332361   29930 main.go:141] libmachine: Using API Version  1
I0919 22:35:02.332383   29930 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 22:35:02.332818   29930 main.go:141] libmachine: () Calling .GetMachineName
I0919 22:35:02.333071   29930 main.go:141] libmachine: (functional-351278) Calling .GetState
I0919 22:35:02.335368   29930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 22:35:02.335418   29930 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 22:35:02.349854   29930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40819
I0919 22:35:02.350324   29930 main.go:141] libmachine: () Calling .GetVersion
I0919 22:35:02.350811   29930 main.go:141] libmachine: Using API Version  1
I0919 22:35:02.350842   29930 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 22:35:02.351260   29930 main.go:141] libmachine: () Calling .GetMachineName
I0919 22:35:02.351460   29930 main.go:141] libmachine: (functional-351278) Calling .DriverName
I0919 22:35:02.351688   29930 ssh_runner.go:195] Run: systemctl --version
I0919 22:35:02.351741   29930 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
I0919 22:35:02.354930   29930 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
I0919 22:35:02.355312   29930 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
I0919 22:35:02.355359   29930 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
I0919 22:35:02.355556   29930 main.go:141] libmachine: (functional-351278) Calling .GetSSHPort
I0919 22:35:02.355792   29930 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
I0919 22:35:02.355964   29930 main.go:141] libmachine: (functional-351278) Calling .GetSSHUsername
I0919 22:35:02.356102   29930 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/functional-351278/id_rsa Username:docker}
I0919 22:35:02.450178   29930 ssh_runner.go:195] Run: sudo crictl images --output json
I0919 22:35:02.500478   29930 main.go:141] libmachine: Making call to close driver server
I0919 22:35:02.500494   29930 main.go:141] libmachine: (functional-351278) Calling .Close
I0919 22:35:02.500796   29930 main.go:141] libmachine: Successfully made call to close driver server
I0919 22:35:02.500815   29930 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 22:35:02.500831   29930 main.go:141] libmachine: Making call to close driver server
I0919 22:35:02.500840   29930 main.go:141] libmachine: (functional-351278) Calling .Close
I0919 22:35:02.500848   29930 main.go:141] libmachine: (functional-351278) DBG | Closing plugin on server side
I0919 22:35:02.501062   29930 main.go:141] libmachine: Successfully made call to close driver server
I0919 22:35:02.501074   29930 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351278 ssh pgrep buildkitd: exit status 1 (221.288412ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image build -t localhost/my-image:functional-351278 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-351278 image build -t localhost/my-image:functional-351278 testdata/build --alsologtostderr: (2.313858868s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-351278 image build -t localhost/my-image:functional-351278 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9d2f82b0961
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-351278
--> f4c3bbcf634
Successfully tagged localhost/my-image:functional-351278
f4c3bbcf634ada9018d20e43027a3ab87da0606e979a17eaf0652c96fea6520a
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-351278 image build -t localhost/my-image:functional-351278 testdata/build --alsologtostderr:
I0919 22:35:02.778934   29983 out.go:360] Setting OutFile to fd 1 ...
I0919 22:35:02.779206   29983 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:02.779215   29983 out.go:374] Setting ErrFile to fd 2...
I0919 22:35:02.779220   29983 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:35:02.779481   29983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
I0919 22:35:02.780206   29983 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:02.781094   29983 config.go:182] Loaded profile config "functional-351278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:35:02.781653   29983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 22:35:02.781705   29983 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 22:35:02.795614   29983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35903
I0919 22:35:02.796129   29983 main.go:141] libmachine: () Calling .GetVersion
I0919 22:35:02.796609   29983 main.go:141] libmachine: Using API Version  1
I0919 22:35:02.796632   29983 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 22:35:02.797082   29983 main.go:141] libmachine: () Calling .GetMachineName
I0919 22:35:02.797363   29983 main.go:141] libmachine: (functional-351278) Calling .GetState
I0919 22:35:02.799750   29983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0919 22:35:02.799800   29983 main.go:141] libmachine: Launching plugin server for driver kvm2
I0919 22:35:02.813308   29983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36005
I0919 22:35:02.813717   29983 main.go:141] libmachine: () Calling .GetVersion
I0919 22:35:02.814333   29983 main.go:141] libmachine: Using API Version  1
I0919 22:35:02.814369   29983 main.go:141] libmachine: () Calling .SetConfigRaw
I0919 22:35:02.814711   29983 main.go:141] libmachine: () Calling .GetMachineName
I0919 22:35:02.814932   29983 main.go:141] libmachine: (functional-351278) Calling .DriverName
I0919 22:35:02.815139   29983 ssh_runner.go:195] Run: systemctl --version
I0919 22:35:02.815161   29983 main.go:141] libmachine: (functional-351278) Calling .GetSSHHostname
I0919 22:35:02.818027   29983 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
I0919 22:35:02.818488   29983 main.go:141] libmachine: (functional-351278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:7c:f3", ip: ""} in network mk-functional-351278: {Iface:virbr1 ExpiryTime:2025-09-19 23:22:28 +0000 UTC Type:0 Mac:52:54:00:10:7c:f3 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-351278 Clientid:01:52:54:00:10:7c:f3}
I0919 22:35:02.818540   29983 main.go:141] libmachine: (functional-351278) DBG | domain functional-351278 has defined IP address 192.168.39.95 and MAC address 52:54:00:10:7c:f3 in network mk-functional-351278
I0919 22:35:02.818737   29983 main.go:141] libmachine: (functional-351278) Calling .GetSSHPort
I0919 22:35:02.818956   29983 main.go:141] libmachine: (functional-351278) Calling .GetSSHKeyPath
I0919 22:35:02.819126   29983 main.go:141] libmachine: (functional-351278) Calling .GetSSHUsername
I0919 22:35:02.819286   29983 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/functional-351278/id_rsa Username:docker}
I0919 22:35:02.918208   29983 build_images.go:161] Building image from path: /tmp/build.3940066058.tar
I0919 22:35:02.918274   29983 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0919 22:35:02.934063   29983 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3940066058.tar
I0919 22:35:02.943044   29983 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3940066058.tar: stat -c "%s %y" /var/lib/minikube/build/build.3940066058.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3940066058.tar': No such file or directory
I0919 22:35:02.943080   29983 ssh_runner.go:362] scp /tmp/build.3940066058.tar --> /var/lib/minikube/build/build.3940066058.tar (3072 bytes)
I0919 22:35:03.003375   29983 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3940066058
I0919 22:35:03.021276   29983 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3940066058 -xf /var/lib/minikube/build/build.3940066058.tar
I0919 22:35:03.038026   29983 crio.go:315] Building image: /var/lib/minikube/build/build.3940066058
I0919 22:35:03.038084   29983 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-351278 /var/lib/minikube/build/build.3940066058 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0919 22:35:05.006654   29983 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-351278 /var/lib/minikube/build/build.3940066058 --cgroup-manager=cgroupfs: (1.968549893s)
I0919 22:35:05.006748   29983 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3940066058
I0919 22:35:05.023811   29983 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3940066058.tar
I0919 22:35:05.036894   29983 build_images.go:217] Built localhost/my-image:functional-351278 from /tmp/build.3940066058.tar
I0919 22:35:05.036934   29983 build_images.go:133] succeeded building to: functional-351278
I0919 22:35:05.036941   29983 build_images.go:134] failed building to: 
I0919 22:35:05.036971   29983 main.go:141] libmachine: Making call to close driver server
I0919 22:35:05.037003   29983 main.go:141] libmachine: (functional-351278) Calling .Close
I0919 22:35:05.037295   29983 main.go:141] libmachine: Successfully made call to close driver server
I0919 22:35:05.037313   29983 main.go:141] libmachine: Making call to close connection to plugin binary
I0919 22:35:05.037321   29983 main.go:141] libmachine: Making call to close driver server
I0919 22:35:05.037327   29983 main.go:141] libmachine: (functional-351278) Calling .Close
I0919 22:35:05.037594   29983 main.go:141] libmachine: (functional-351278) DBG | Closing plugin on server side
I0919 22:35:05.037615   29983 main.go:141] libmachine: Successfully made call to close driver server
I0919 22:35:05.037635   29983 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)
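
The build log also pins down the backend: on CRI-O, `image build` copies the context tarball into the VM under /var/lib/minikube/build and runs `sudo podman build ... --cgroup-manager=cgroupfs` against it. The three STEP lines imply a build context along the lines of the sketch below, reconstructed from the log rather than copied from the repo (the content.txt payload is a placeholder):

    # Recreate the inferred testdata/build context and run the same build.
    mkdir -p /tmp/build && echo hello > /tmp/build/content.txt
    cat > /tmp/build/Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    out/minikube-linux-amd64 -p functional-351278 image build -t localhost/my-image:functional-351278 /tmp/build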

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-351278
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
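
All three UpdateContextCmd variants exercise the same subcommand: `update-context` rewrites the server address for this cluster in the active kubeconfig if the VM's IP has changed, and `-v=2` only raises log verbosity. One way to verify the result by hand might be (the kubectl query is illustrative):

    # Re-sync kubeconfig with the VM's current IP, then read back the endpoint.
    out/minikube-linux-amd64 -p functional-351278 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-351278")].cluster.server}'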

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image load --daemon kicbase/echo-server:functional-351278 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-351278 image load --daemon kicbase/echo-server:functional-351278 --alsologtostderr: (1.309403441s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)
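
The Setup test above primed the host's Docker daemon with `kicbase/echo-server:1.0` retagged to the profile name; `image load --daemon` then streams that image from the host daemon into the cluster's CRI-O storage, where later `image ls` calls see it as `localhost/kicbase/echo-server:functional-351278`. The whole flow, condensed from the commands in these tests:

    # Stage an image in the host Docker daemon, then push it into the cluster.
    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-351278
    out/minikube-linux-amd64 -p functional-351278 image load --daemon kicbase/echo-server:functional-351278
    out/minikube-linux-amd64 -p functional-351278 image ls | grep echo-server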

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "302.161921ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "53.802181ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image load --daemon kicbase/echo-server:functional-351278 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "290.49252ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "48.359852ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)
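
The timing gap between the two ProfileCmd runs is the point of the assertion: a plain `profile list` probes each cluster's live status, while `--light` skips the status check and reads only the on-disk profile configs, which is why it returns in roughly 50ms instead of roughly 300ms above. A sketch of consuming the JSON output (the `valid`/`Name` field names are as observed in minikube's output and should be treated as illustrative):

    # Fast profile enumeration without touching the clusters.
    out/minikube-linux-amd64 profile list -o json --light | jq -r '.valid[].Name'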

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-351278
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image load --daemon kicbase/echo-server:functional-351278 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image save kicbase/echo-server:functional-351278 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image rm kicbase/echo-server:functional-351278 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-351278
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 image save --daemon kicbase/echo-server:functional-351278 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-351278
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
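
Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile amount to a full tar round-trip through the cluster runtime, and ImageSaveDaemon closes the loop back into the host's Docker daemon via `image save --daemon`. The file-based round-trip, stitched into one sequence (tag as in the logs, tarball path simplified to /tmp):

    # Save the in-cluster image to a tarball, delete it, then restore it.
    out/minikube-linux-amd64 -p functional-351278 image save kicbase/echo-server:functional-351278 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-351278 image rm kicbase/echo-server:functional-351278
    out/minikube-linux-amd64 -p functional-351278 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-351278 image ls | grep echo-server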

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (97.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-351278 /tmp/TestFunctionalparallelMountCmdany-port2270664554/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1758321069045638215" to /tmp/TestFunctionalparallelMountCmdany-port2270664554/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1758321069045638215" to /tmp/TestFunctionalparallelMountCmdany-port2270664554/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1758321069045638215" to /tmp/TestFunctionalparallelMountCmdany-port2270664554/001/test-1758321069045638215
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351278 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (199.778033ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 22:31:09.245788   18671 retry.go:31] will retry after 435.594166ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 19 22:31 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 19 22:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 19 22:31 test-1758321069045638215
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh cat /mount-9p/test-1758321069045638215
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-351278 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [dc475075-c430-4387-b9b0-f728391024f1] Pending
helpers_test.go:352: "busybox-mount" [dc475075-c430-4387-b9b0-f728391024f1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0919 22:31:36.656518   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [dc475075-c430-4387-b9b0-f728391024f1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [dc475075-c430-4387-b9b0-f728391024f1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m35.003688145s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-351278 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-351278 /tmp/TestFunctionalparallelMountCmdany-port2270664554/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (97.31s)
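
The first findmnt probe failing with exit status 1 and being retried about 400ms later is expected: the 9p server spawned by "minikube mount" needs a moment before the guest mount becomes visible. A minimal sketch of the same probe, assuming an illustrative source directory:

	mkdir -p /tmp/mount-src
	out/minikube-linux-amd64 mount -p functional-351278 /tmp/mount-src:/mount-9p &
	until out/minikube-linux-amd64 -p functional-351278 ssh "findmnt -T /mount-9p | grep 9p"; do sleep 1; done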

TestFunctional/parallel/MountCmd/specific-port (1.62s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-351278 /tmp/TestFunctionalparallelMountCmdspecific-port3719278976/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351278 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (196.525218ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0919 22:32:46.555924   18671 retry.go:31] will retry after 440.312501ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-351278 /tmp/TestFunctionalparallelMountCmdspecific-port3719278976/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351278 ssh "sudo umount -f /mount-9p": exit status 1 (191.408384ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-351278 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-351278 /tmp/TestFunctionalparallelMountCmdspecific-port3719278976/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.62s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.5s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-351278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2795973708/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-351278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2795973708/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-351278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2795973708/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-351278 ssh "findmnt -T" /mount1: exit status 1 (216.557156ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0919 22:32:48.192587   18671 retry.go:31] will retry after 674.816953ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-351278 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-351278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2795973708/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-351278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2795973708/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-351278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2795973708/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.50s)
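
VerifyCleanup starts three mount daemons (/mount1, /mount2, /mount3) backed by the same host directory and checks that one kill switch reaps them all; the "unable to find parent, assuming dead" helper lines confirm the daemons are gone afterwards. The cleanup command, as run above:

	out/minikube-linux-amd64 mount -p functional-351278 --kill=true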

TestFunctional/parallel/ServiceCmd/List (1.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-351278 service list: (1.236857917s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.24s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-351278 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-351278 service list -o json: (1.241031398s)
functional_test.go:1504: Took "1.241123797s" to run "out/minikube-linux-amd64 -p functional-351278 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)
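
"service list -o json" emits a machine-readable listing of the table shown by the plain list command. A hedged example of consuming it, assuming jq is available and assuming the Name field from typical minikube output:

	out/minikube-linux-amd64 -p functional-351278 service list -o json | jq -r '.[].Name'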

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-351278
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-351278
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-351278
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (256.45s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0919 22:37:59.733707   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:58.388428   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:58.394845   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:58.406325   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:58.427800   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:58.469254   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:58.550712   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:58.712373   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:59.033923   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:59.676208   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:40:00.957660   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:40:03.520470   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:40:08.642318   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:40:18.884585   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:40:39.365924   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:41:20.327584   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:41:36.656284   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-003898 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (4m15.698909486s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (256.45s)
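
The repeated "Loading client cert failed ... client.crt: no such file or directory" lines during this start appear to come from stale cert watchers for the addons-266998 and functional-351278 profiles torn down earlier in the run; they are noise relative to this test, which passes. The cluster shape being built, as run above (flags abridged, timings will vary):

	out/minikube-linux-amd64 -p ha-003898 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p ha-003898 status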

TestMultiControlPlane/serial/DeployApp (5.4s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-003898 kubectl -- rollout status deployment/busybox: (3.182347278s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- exec busybox-7b57f96db7-cxlfg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- exec busybox-7b57f96db7-hfr7b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- exec busybox-7b57f96db7-rmg4b -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- exec busybox-7b57f96db7-cxlfg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- exec busybox-7b57f96db7-hfr7b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- exec busybox-7b57f96db7-rmg4b -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- exec busybox-7b57f96db7-cxlfg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- exec busybox-7b57f96db7-hfr7b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- exec busybox-7b57f96db7-rmg4b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.40s)

TestMultiControlPlane/serial/PingHostFromPods (1.24s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- exec busybox-7b57f96db7-cxlfg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- exec busybox-7b57f96db7-cxlfg -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- exec busybox-7b57f96db7-hfr7b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- exec busybox-7b57f96db7-hfr7b -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- exec busybox-7b57f96db7-rmg4b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 kubectl -- exec busybox-7b57f96db7-rmg4b -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.24s)
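
The nslookup/awk/cut pipeline above extracts the host IP as seen from inside a pod: the busybox image's nslookup output is assumed to carry the resolved address on its fifth line, awk 'NR==5' keeps that line, and cut -d' ' -f3 keeps its third space-separated field (the address), which the follow-up "ping -c 1" then targets. As a standalone command against a pod from this run:

	kubectl --context ha-003898 exec busybox-7b57f96db7-cxlfg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"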

TestMultiControlPlane/serial/AddWorkerNode (44.55s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 node add --alsologtostderr -v 5
E0919 22:42:42.249018   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-003898 node add --alsologtostderr -v 5: (43.598578745s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.55s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-003898 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

TestMultiControlPlane/serial/CopyFile (13.61s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp testdata/cp-test.txt ha-003898:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp ha-003898:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3903719268/001/cp-test_ha-003898.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp ha-003898:/home/docker/cp-test.txt ha-003898-m02:/home/docker/cp-test_ha-003898_ha-003898-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m02 "sudo cat /home/docker/cp-test_ha-003898_ha-003898-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp ha-003898:/home/docker/cp-test.txt ha-003898-m03:/home/docker/cp-test_ha-003898_ha-003898-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m03 "sudo cat /home/docker/cp-test_ha-003898_ha-003898-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp ha-003898:/home/docker/cp-test.txt ha-003898-m04:/home/docker/cp-test_ha-003898_ha-003898-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m04 "sudo cat /home/docker/cp-test_ha-003898_ha-003898-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp testdata/cp-test.txt ha-003898-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp ha-003898-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3903719268/001/cp-test_ha-003898-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp ha-003898-m02:/home/docker/cp-test.txt ha-003898:/home/docker/cp-test_ha-003898-m02_ha-003898.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898 "sudo cat /home/docker/cp-test_ha-003898-m02_ha-003898.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp ha-003898-m02:/home/docker/cp-test.txt ha-003898-m03:/home/docker/cp-test_ha-003898-m02_ha-003898-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m03 "sudo cat /home/docker/cp-test_ha-003898-m02_ha-003898-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp ha-003898-m02:/home/docker/cp-test.txt ha-003898-m04:/home/docker/cp-test_ha-003898-m02_ha-003898-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m04 "sudo cat /home/docker/cp-test_ha-003898-m02_ha-003898-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp testdata/cp-test.txt ha-003898-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp ha-003898-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3903719268/001/cp-test_ha-003898-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp ha-003898-m03:/home/docker/cp-test.txt ha-003898:/home/docker/cp-test_ha-003898-m03_ha-003898.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898 "sudo cat /home/docker/cp-test_ha-003898-m03_ha-003898.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp ha-003898-m03:/home/docker/cp-test.txt ha-003898-m02:/home/docker/cp-test_ha-003898-m03_ha-003898-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m02 "sudo cat /home/docker/cp-test_ha-003898-m03_ha-003898-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp ha-003898-m03:/home/docker/cp-test.txt ha-003898-m04:/home/docker/cp-test_ha-003898-m03_ha-003898-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m04 "sudo cat /home/docker/cp-test_ha-003898-m03_ha-003898-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp testdata/cp-test.txt ha-003898-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp ha-003898-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3903719268/001/cp-test_ha-003898-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp ha-003898-m04:/home/docker/cp-test.txt ha-003898:/home/docker/cp-test_ha-003898-m04_ha-003898.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898 "sudo cat /home/docker/cp-test_ha-003898-m04_ha-003898.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp ha-003898-m04:/home/docker/cp-test.txt ha-003898-m02:/home/docker/cp-test_ha-003898-m04_ha-003898-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m02 "sudo cat /home/docker/cp-test_ha-003898-m04_ha-003898-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 cp ha-003898-m04:/home/docker/cp-test.txt ha-003898-m03:/home/docker/cp-test_ha-003898-m04_ha-003898-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m03 "sudo cat /home/docker/cp-test_ha-003898-m04_ha-003898-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.61s)
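
CopyFile walks the full (source, destination) node matrix: for every pair it copies testdata/cp-test.txt with "minikube cp" and verifies the bytes over ssh. The host-to-m02 leg, for example, reduces to:

	out/minikube-linux-amd64 -p ha-003898 cp testdata/cp-test.txt ha-003898-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-003898 ssh -n ha-003898-m02 "sudo cat /home/docker/cp-test.txt"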

TestMultiControlPlane/serial/StopSecondaryNode (80.8s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-003898 node stop m02 --alsologtostderr -v 5: (1m20.120137426s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-003898 status --alsologtostderr -v 5: exit status 7 (682.200742ms)

-- stdout --
	ha-003898
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-003898-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-003898-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-003898-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0919 22:44:36.253423   35807 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:44:36.253702   35807 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:44:36.253714   35807 out.go:374] Setting ErrFile to fd 2...
	I0919 22:44:36.253721   35807 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:44:36.253980   35807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	I0919 22:44:36.254163   35807 out.go:368] Setting JSON to false
	I0919 22:44:36.254185   35807 mustload.go:65] Loading cluster: ha-003898
	I0919 22:44:36.254271   35807 notify.go:220] Checking for updates...
	I0919 22:44:36.254662   35807 config.go:182] Loaded profile config "ha-003898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:44:36.254689   35807 status.go:174] checking status of ha-003898 ...
	I0919 22:44:36.255161   35807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:44:36.255207   35807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:44:36.269986   35807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42741
	I0919 22:44:36.270443   35807 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:44:36.270994   35807 main.go:141] libmachine: Using API Version  1
	I0919 22:44:36.271020   35807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:44:36.271421   35807 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:44:36.271602   35807 main.go:141] libmachine: (ha-003898) Calling .GetState
	I0919 22:44:36.273216   35807 status.go:371] ha-003898 host status = "Running" (err=<nil>)
	I0919 22:44:36.273236   35807 host.go:66] Checking if "ha-003898" exists ...
	I0919 22:44:36.273573   35807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:44:36.273641   35807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:44:36.287238   35807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36089
	I0919 22:44:36.287607   35807 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:44:36.288083   35807 main.go:141] libmachine: Using API Version  1
	I0919 22:44:36.288102   35807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:44:36.288424   35807 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:44:36.288606   35807 main.go:141] libmachine: (ha-003898) Calling .GetIP
	I0919 22:44:36.291308   35807 main.go:141] libmachine: (ha-003898) DBG | domain ha-003898 has defined MAC address 52:54:00:c1:07:11 in network mk-ha-003898
	I0919 22:44:36.291782   35807 main.go:141] libmachine: (ha-003898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:07:11", ip: ""} in network mk-ha-003898: {Iface:virbr1 ExpiryTime:2025-09-19 23:38:10 +0000 UTC Type:0 Mac:52:54:00:c1:07:11 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-003898 Clientid:01:52:54:00:c1:07:11}
	I0919 22:44:36.291811   35807 main.go:141] libmachine: (ha-003898) DBG | domain ha-003898 has defined IP address 192.168.39.139 and MAC address 52:54:00:c1:07:11 in network mk-ha-003898
	I0919 22:44:36.291917   35807 host.go:66] Checking if "ha-003898" exists ...
	I0919 22:44:36.292184   35807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:44:36.292226   35807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:44:36.306273   35807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43015
	I0919 22:44:36.306817   35807 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:44:36.307308   35807 main.go:141] libmachine: Using API Version  1
	I0919 22:44:36.307333   35807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:44:36.307674   35807 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:44:36.307868   35807 main.go:141] libmachine: (ha-003898) Calling .DriverName
	I0919 22:44:36.308086   35807 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:44:36.308110   35807 main.go:141] libmachine: (ha-003898) Calling .GetSSHHostname
	I0919 22:44:36.311096   35807 main.go:141] libmachine: (ha-003898) DBG | domain ha-003898 has defined MAC address 52:54:00:c1:07:11 in network mk-ha-003898
	I0919 22:44:36.311566   35807 main.go:141] libmachine: (ha-003898) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:07:11", ip: ""} in network mk-ha-003898: {Iface:virbr1 ExpiryTime:2025-09-19 23:38:10 +0000 UTC Type:0 Mac:52:54:00:c1:07:11 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:ha-003898 Clientid:01:52:54:00:c1:07:11}
	I0919 22:44:36.311595   35807 main.go:141] libmachine: (ha-003898) DBG | domain ha-003898 has defined IP address 192.168.39.139 and MAC address 52:54:00:c1:07:11 in network mk-ha-003898
	I0919 22:44:36.311770   35807 main.go:141] libmachine: (ha-003898) Calling .GetSSHPort
	I0919 22:44:36.311941   35807 main.go:141] libmachine: (ha-003898) Calling .GetSSHKeyPath
	I0919 22:44:36.312083   35807 main.go:141] libmachine: (ha-003898) Calling .GetSSHUsername
	I0919 22:44:36.312233   35807 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/ha-003898/id_rsa Username:docker}
	I0919 22:44:36.401944   35807 ssh_runner.go:195] Run: systemctl --version
	I0919 22:44:36.411022   35807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:44:36.431078   35807 kubeconfig.go:125] found "ha-003898" server: "https://192.168.39.254:8443"
	I0919 22:44:36.431110   35807 api_server.go:166] Checking apiserver status ...
	I0919 22:44:36.431142   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:36.453168   35807 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	W0919 22:44:36.466482   35807 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:44:36.466557   35807 ssh_runner.go:195] Run: ls
	I0919 22:44:36.472708   35807 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0919 22:44:36.481597   35807 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0919 22:44:36.481622   35807 status.go:463] ha-003898 apiserver status = Running (err=<nil>)
	I0919 22:44:36.481632   35807 status.go:176] ha-003898 status: &{Name:ha-003898 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:44:36.481653   35807 status.go:174] checking status of ha-003898-m02 ...
	I0919 22:44:36.481956   35807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:44:36.481989   35807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:44:36.496590   35807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32887
	I0919 22:44:36.497142   35807 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:44:36.497681   35807 main.go:141] libmachine: Using API Version  1
	I0919 22:44:36.497700   35807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:44:36.498065   35807 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:44:36.498266   35807 main.go:141] libmachine: (ha-003898-m02) Calling .GetState
	I0919 22:44:36.500036   35807 status.go:371] ha-003898-m02 host status = "Stopped" (err=<nil>)
	I0919 22:44:36.500049   35807 status.go:384] host is not running, skipping remaining checks
	I0919 22:44:36.500055   35807 status.go:176] ha-003898-m02 status: &{Name:ha-003898-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:44:36.500069   35807 status.go:174] checking status of ha-003898-m03 ...
	I0919 22:44:36.500390   35807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:44:36.500473   35807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:44:36.514210   35807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41123
	I0919 22:44:36.514785   35807 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:44:36.515261   35807 main.go:141] libmachine: Using API Version  1
	I0919 22:44:36.515282   35807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:44:36.515646   35807 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:44:36.515860   35807 main.go:141] libmachine: (ha-003898-m03) Calling .GetState
	I0919 22:44:36.517630   35807 status.go:371] ha-003898-m03 host status = "Running" (err=<nil>)
	I0919 22:44:36.517649   35807 host.go:66] Checking if "ha-003898-m03" exists ...
	I0919 22:44:36.517970   35807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:44:36.518012   35807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:44:36.531795   35807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37393
	I0919 22:44:36.532259   35807 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:44:36.532674   35807 main.go:141] libmachine: Using API Version  1
	I0919 22:44:36.532695   35807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:44:36.533077   35807 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:44:36.533276   35807 main.go:141] libmachine: (ha-003898-m03) Calling .GetIP
	I0919 22:44:36.536376   35807 main.go:141] libmachine: (ha-003898-m03) DBG | domain ha-003898-m03 has defined MAC address 52:54:00:cd:1e:26 in network mk-ha-003898
	I0919 22:44:36.536858   35807 main.go:141] libmachine: (ha-003898-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:1e:26", ip: ""} in network mk-ha-003898: {Iface:virbr1 ExpiryTime:2025-09-19 23:40:26 +0000 UTC Type:0 Mac:52:54:00:cd:1e:26 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:ha-003898-m03 Clientid:01:52:54:00:cd:1e:26}
	I0919 22:44:36.536892   35807 main.go:141] libmachine: (ha-003898-m03) DBG | domain ha-003898-m03 has defined IP address 192.168.39.213 and MAC address 52:54:00:cd:1e:26 in network mk-ha-003898
	I0919 22:44:36.537065   35807 host.go:66] Checking if "ha-003898-m03" exists ...
	I0919 22:44:36.537366   35807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:44:36.537428   35807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:44:36.551475   35807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38463
	I0919 22:44:36.552074   35807 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:44:36.552526   35807 main.go:141] libmachine: Using API Version  1
	I0919 22:44:36.552557   35807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:44:36.552938   35807 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:44:36.553110   35807 main.go:141] libmachine: (ha-003898-m03) Calling .DriverName
	I0919 22:44:36.553309   35807 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:44:36.553339   35807 main.go:141] libmachine: (ha-003898-m03) Calling .GetSSHHostname
	I0919 22:44:36.556626   35807 main.go:141] libmachine: (ha-003898-m03) DBG | domain ha-003898-m03 has defined MAC address 52:54:00:cd:1e:26 in network mk-ha-003898
	I0919 22:44:36.557145   35807 main.go:141] libmachine: (ha-003898-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:1e:26", ip: ""} in network mk-ha-003898: {Iface:virbr1 ExpiryTime:2025-09-19 23:40:26 +0000 UTC Type:0 Mac:52:54:00:cd:1e:26 Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:ha-003898-m03 Clientid:01:52:54:00:cd:1e:26}
	I0919 22:44:36.557174   35807 main.go:141] libmachine: (ha-003898-m03) DBG | domain ha-003898-m03 has defined IP address 192.168.39.213 and MAC address 52:54:00:cd:1e:26 in network mk-ha-003898
	I0919 22:44:36.557319   35807 main.go:141] libmachine: (ha-003898-m03) Calling .GetSSHPort
	I0919 22:44:36.557490   35807 main.go:141] libmachine: (ha-003898-m03) Calling .GetSSHKeyPath
	I0919 22:44:36.557635   35807 main.go:141] libmachine: (ha-003898-m03) Calling .GetSSHUsername
	I0919 22:44:36.557817   35807 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/ha-003898-m03/id_rsa Username:docker}
	I0919 22:44:36.647965   35807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:44:36.670940   35807 kubeconfig.go:125] found "ha-003898" server: "https://192.168.39.254:8443"
	I0919 22:44:36.670965   35807 api_server.go:166] Checking apiserver status ...
	I0919 22:44:36.670998   35807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:44:36.692561   35807 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1825/cgroup
	W0919 22:44:36.705052   35807 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1825/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:44:36.705115   35807 ssh_runner.go:195] Run: ls
	I0919 22:44:36.711717   35807 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0919 22:44:36.716761   35807 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0919 22:44:36.716793   35807 status.go:463] ha-003898-m03 apiserver status = Running (err=<nil>)
	I0919 22:44:36.716817   35807 status.go:176] ha-003898-m03 status: &{Name:ha-003898-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:44:36.716831   35807 status.go:174] checking status of ha-003898-m04 ...
	I0919 22:44:36.717153   35807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:44:36.717198   35807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:44:36.731369   35807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33725
	I0919 22:44:36.731853   35807 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:44:36.732343   35807 main.go:141] libmachine: Using API Version  1
	I0919 22:44:36.732368   35807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:44:36.732689   35807 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:44:36.732906   35807 main.go:141] libmachine: (ha-003898-m04) Calling .GetState
	I0919 22:44:36.734719   35807 status.go:371] ha-003898-m04 host status = "Running" (err=<nil>)
	I0919 22:44:36.734748   35807 host.go:66] Checking if "ha-003898-m04" exists ...
	I0919 22:44:36.735025   35807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:44:36.735067   35807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:44:36.749717   35807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41199
	I0919 22:44:36.750148   35807 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:44:36.750604   35807 main.go:141] libmachine: Using API Version  1
	I0919 22:44:36.750626   35807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:44:36.750991   35807 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:44:36.751184   35807 main.go:141] libmachine: (ha-003898-m04) Calling .GetIP
	I0919 22:44:36.754035   35807 main.go:141] libmachine: (ha-003898-m04) DBG | domain ha-003898-m04 has defined MAC address 52:54:00:e7:45:48 in network mk-ha-003898
	I0919 22:44:36.754516   35807 main.go:141] libmachine: (ha-003898-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:45:48", ip: ""} in network mk-ha-003898: {Iface:virbr1 ExpiryTime:2025-09-19 23:42:33 +0000 UTC Type:0 Mac:52:54:00:e7:45:48 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-003898-m04 Clientid:01:52:54:00:e7:45:48}
	I0919 22:44:36.754544   35807 main.go:141] libmachine: (ha-003898-m04) DBG | domain ha-003898-m04 has defined IP address 192.168.39.178 and MAC address 52:54:00:e7:45:48 in network mk-ha-003898
	I0919 22:44:36.754761   35807 host.go:66] Checking if "ha-003898-m04" exists ...
	I0919 22:44:36.755082   35807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:44:36.755125   35807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:44:36.769428   35807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38745
	I0919 22:44:36.769949   35807 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:44:36.770448   35807 main.go:141] libmachine: Using API Version  1
	I0919 22:44:36.770471   35807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:44:36.770803   35807 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:44:36.771027   35807 main.go:141] libmachine: (ha-003898-m04) Calling .DriverName
	I0919 22:44:36.771241   35807 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:44:36.771267   35807 main.go:141] libmachine: (ha-003898-m04) Calling .GetSSHHostname
	I0919 22:44:36.774933   35807 main.go:141] libmachine: (ha-003898-m04) DBG | domain ha-003898-m04 has defined MAC address 52:54:00:e7:45:48 in network mk-ha-003898
	I0919 22:44:36.775445   35807 main.go:141] libmachine: (ha-003898-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:45:48", ip: ""} in network mk-ha-003898: {Iface:virbr1 ExpiryTime:2025-09-19 23:42:33 +0000 UTC Type:0 Mac:52:54:00:e7:45:48 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-003898-m04 Clientid:01:52:54:00:e7:45:48}
	I0919 22:44:36.775481   35807 main.go:141] libmachine: (ha-003898-m04) DBG | domain ha-003898-m04 has defined IP address 192.168.39.178 and MAC address 52:54:00:e7:45:48 in network mk-ha-003898
	I0919 22:44:36.775645   35807 main.go:141] libmachine: (ha-003898-m04) Calling .GetSSHPort
	I0919 22:44:36.775859   35807 main.go:141] libmachine: (ha-003898-m04) Calling .GetSSHKeyPath
	I0919 22:44:36.776016   35807 main.go:141] libmachine: (ha-003898-m04) Calling .GetSSHUsername
	I0919 22:44:36.776149   35807 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/ha-003898-m04/id_rsa Username:docker}
	I0919 22:44:36.865424   35807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:44:36.886629   35807 status.go:176] ha-003898-m04 status: &{Name:ha-003898-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (80.80s)
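
The non-zero status here is the expected outcome rather than a failure: "minikube status" exits non-zero (7 in this run) when any host in the profile is stopped, which is exactly what the test asserts after stopping m02. A sketch of the same assertion:

	out/minikube-linux-amd64 -p ha-003898 node stop m02
	out/minikube-linux-amd64 -p ha-003898 status || echo "status exited $? (non-zero expected while m02 is down)"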

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

TestMultiControlPlane/serial/RestartSecondaryNode (36.79s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 node start m02 --alsologtostderr -v 5
E0919 22:44:58.387691   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-003898 node start m02 --alsologtostderr -v 5: (35.696704758s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-003898 status --alsologtostderr -v 5: (1.013553104s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (36.79s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.21s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.21340394s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.21s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (386.75s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 stop --alsologtostderr -v 5
E0919 22:45:26.091649   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:46:36.662950   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-003898 stop --alsologtostderr -v 5: (4m15.894733593s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 start --wait true --alsologtostderr -v 5
E0919 22:49:58.387478   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:51:36.656186   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-003898 start --wait true --alsologtostderr -v 5: (2m10.752133198s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (386.75s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.48s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-003898 node delete m03 --alsologtostderr -v 5: (17.670333859s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.48s)
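
The readiness probe at the end reads more easily outside the test harness's quoting; the same go-template as a standalone command:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'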

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

TestMultiControlPlane/serial/StopCluster (254.61s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 stop --alsologtostderr -v 5
E0919 22:54:39.735638   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:54:58.388404   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-003898 stop --alsologtostderr -v 5: (4m14.500885683s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-003898 status --alsologtostderr -v 5: exit status 7 (104.014219ms)

-- stdout --
	ha-003898
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-003898-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-003898-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0919 22:56:16.069443   39826 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:56:16.069764   39826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:56:16.069774   39826 out.go:374] Setting ErrFile to fd 2...
	I0919 22:56:16.069779   39826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:56:16.069978   39826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	I0919 22:56:16.070154   39826 out.go:368] Setting JSON to false
	I0919 22:56:16.070174   39826 mustload.go:65] Loading cluster: ha-003898
	I0919 22:56:16.070247   39826 notify.go:220] Checking for updates...
	I0919 22:56:16.070513   39826 config.go:182] Loaded profile config "ha-003898": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:56:16.070533   39826 status.go:174] checking status of ha-003898 ...
	I0919 22:56:16.070963   39826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:56:16.071002   39826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:56:16.085059   39826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42995
	I0919 22:56:16.085546   39826 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:56:16.086077   39826 main.go:141] libmachine: Using API Version  1
	I0919 22:56:16.086100   39826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:56:16.086477   39826 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:56:16.086699   39826 main.go:141] libmachine: (ha-003898) Calling .GetState
	I0919 22:56:16.088512   39826 status.go:371] ha-003898 host status = "Stopped" (err=<nil>)
	I0919 22:56:16.088527   39826 status.go:384] host is not running, skipping remaining checks
	I0919 22:56:16.088532   39826 status.go:176] ha-003898 status: &{Name:ha-003898 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:56:16.088555   39826 status.go:174] checking status of ha-003898-m02 ...
	I0919 22:56:16.088885   39826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:56:16.088924   39826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:56:16.102502   39826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45981
	I0919 22:56:16.103060   39826 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:56:16.103574   39826 main.go:141] libmachine: Using API Version  1
	I0919 22:56:16.103597   39826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:56:16.103991   39826 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:56:16.104188   39826 main.go:141] libmachine: (ha-003898-m02) Calling .GetState
	I0919 22:56:16.105802   39826 status.go:371] ha-003898-m02 host status = "Stopped" (err=<nil>)
	I0919 22:56:16.105817   39826 status.go:384] host is not running, skipping remaining checks
	I0919 22:56:16.105824   39826 status.go:176] ha-003898-m02 status: &{Name:ha-003898-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:56:16.105863   39826 status.go:174] checking status of ha-003898-m04 ...
	I0919 22:56:16.106150   39826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 22:56:16.106197   39826 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 22:56:16.120397   39826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41091
	I0919 22:56:16.120867   39826 main.go:141] libmachine: () Calling .GetVersion
	I0919 22:56:16.121315   39826 main.go:141] libmachine: Using API Version  1
	I0919 22:56:16.121337   39826 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 22:56:16.121783   39826 main.go:141] libmachine: () Calling .GetMachineName
	I0919 22:56:16.122011   39826 main.go:141] libmachine: (ha-003898-m04) Calling .GetState
	I0919 22:56:16.123852   39826 status.go:371] ha-003898-m04 host status = "Stopped" (err=<nil>)
	I0919 22:56:16.123866   39826 status.go:384] host is not running, skipping remaining checks
	I0919 22:56:16.123872   39826 status.go:176] ha-003898-m04 status: &{Name:ha-003898-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (254.61s)

TestMultiControlPlane/serial/RestartCluster (110.07s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0919 22:56:21.454937   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:56:36.655867   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-003898 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m49.226610617s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (110.07s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

TestMultiControlPlane/serial/AddSecondaryNode (103.56s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-003898 node add --control-plane --alsologtostderr -v 5: (1m42.600179349s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-003898 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (103.56s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.95s)

TestJSONOutput/start/Command (85.38s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-739050 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0919 22:59:58.389093   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-739050 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.381320746s)
--- PASS: TestJSONOutput/start/Command (85.38s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.86s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-739050 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.86s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.73s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-739050 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.73s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-739050 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-739050 --output=json --user=testUser: (7.850422951s)
--- PASS: TestJSONOutput/stop/Command (7.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-521277 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-521277 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (62.994229ms)

-- stdout --
	{"specversion":"1.0","id":"3ea139d4-35bb-40e2-beb9-8e6301eba177","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-521277] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c94235a-051e-4ab5-9138-fd489c182f0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21594"}}
	{"specversion":"1.0","id":"65b14ccb-7a00-4f08-b077-fe9e223f0ba1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bb5aa0b8-317b-4f5b-8516-bb972b302bd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig"}}
	{"specversion":"1.0","id":"e2ead611-c782-449f-aa89-ad0beb5b4e59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube"}}
	{"specversion":"1.0","id":"23d970dc-4b7d-43f3-8694-0fd6bd322b5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5b94e1aa-5f13-47d7-b758-69fc5e389df6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e3bd56ae-d047-45e4-accb-8e86c62e32dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-521277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-521277
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (88.93s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-959700 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0919 23:01:36.661533   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-959700 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (43.691190521s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-971175 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-971175 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.372287208s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-959700
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-971175
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-971175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-971175
helpers_test.go:175: Cleaning up "first-959700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-959700
--- PASS: TestMinikubeProfile (88.93s)

TestMountStart/serial/StartWithMountFirst (25.54s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-385310 --memory=3072 --mount-string /tmp/TestMountStartserial4225092511/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-385310 --memory=3072 --mount-string /tmp/TestMountStartserial4225092511/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (24.536656575s)
--- PASS: TestMountStart/serial/StartWithMountFirst (25.54s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-385310 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-385310 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (21.99s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-400734 --memory=3072 --mount-string /tmp/TestMountStartserial4225092511/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-400734 --memory=3072 --mount-string /tmp/TestMountStartserial4225092511/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.99458335s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.99s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-400734 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-400734 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.74s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-385310 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.74s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-400734 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-400734 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.39s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-400734
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-400734: (1.393091636s)
--- PASS: TestMountStart/serial/Stop (1.39s)

TestMountStart/serial/RestartStopped (20.14s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-400734
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-400734: (19.135118847s)
--- PASS: TestMountStart/serial/RestartStopped (20.14s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-400734 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-400734 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (134.85s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-337202 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0919 23:04:58.388228   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-337202 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m14.397975047s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (134.85s)

TestMultiNode/serial/DeployApp2Nodes (4.11s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337202 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337202 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-337202 -- rollout status deployment/busybox: (2.537292638s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337202 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337202 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337202 -- exec busybox-7b57f96db7-drrwr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337202 -- exec busybox-7b57f96db7-j77hj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337202 -- exec busybox-7b57f96db7-drrwr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337202 -- exec busybox-7b57f96db7-j77hj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337202 -- exec busybox-7b57f96db7-drrwr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337202 -- exec busybox-7b57f96db7-j77hj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.11s)

TestMultiNode/serial/PingHostFrom2Pods (0.81s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337202 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337202 -- exec busybox-7b57f96db7-drrwr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337202 -- exec busybox-7b57f96db7-drrwr -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337202 -- exec busybox-7b57f96db7-j77hj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-337202 -- exec busybox-7b57f96db7-j77hj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

TestMultiNode/serial/AddNode (44.61s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-337202 -v=5 --alsologtostderr
E0919 23:06:36.655764   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-337202 -v=5 --alsologtostderr: (44.000209834s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.61s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-337202 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.62s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

TestMultiNode/serial/CopyFile (7.43s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 cp testdata/cp-test.txt multinode-337202:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 cp multinode-337202:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1425896587/001/cp-test_multinode-337202.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 cp multinode-337202:/home/docker/cp-test.txt multinode-337202-m02:/home/docker/cp-test_multinode-337202_multinode-337202-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202-m02 "sudo cat /home/docker/cp-test_multinode-337202_multinode-337202-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 cp multinode-337202:/home/docker/cp-test.txt multinode-337202-m03:/home/docker/cp-test_multinode-337202_multinode-337202-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202-m03 "sudo cat /home/docker/cp-test_multinode-337202_multinode-337202-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 cp testdata/cp-test.txt multinode-337202-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 cp multinode-337202-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1425896587/001/cp-test_multinode-337202-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 cp multinode-337202-m02:/home/docker/cp-test.txt multinode-337202:/home/docker/cp-test_multinode-337202-m02_multinode-337202.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202 "sudo cat /home/docker/cp-test_multinode-337202-m02_multinode-337202.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 cp multinode-337202-m02:/home/docker/cp-test.txt multinode-337202-m03:/home/docker/cp-test_multinode-337202-m02_multinode-337202-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202-m03 "sudo cat /home/docker/cp-test_multinode-337202-m02_multinode-337202-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 cp testdata/cp-test.txt multinode-337202-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 cp multinode-337202-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1425896587/001/cp-test_multinode-337202-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 cp multinode-337202-m03:/home/docker/cp-test.txt multinode-337202:/home/docker/cp-test_multinode-337202-m03_multinode-337202.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202 "sudo cat /home/docker/cp-test_multinode-337202-m03_multinode-337202.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 cp multinode-337202-m03:/home/docker/cp-test.txt multinode-337202-m02:/home/docker/cp-test_multinode-337202-m03_multinode-337202-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 ssh -n multinode-337202-m02 "sudo cat /home/docker/cp-test_multinode-337202-m03_multinode-337202-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.43s)

TestMultiNode/serial/StopNode (2.74s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-337202 node stop m03: (1.841289807s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-337202 status: exit status 7 (445.560192ms)

-- stdout --
	multinode-337202
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-337202-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-337202-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-337202 status --alsologtostderr: exit status 7 (449.799328ms)

-- stdout --
	multinode-337202
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-337202-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-337202-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0919 23:07:26.301814   47753 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:07:26.301940   47753 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:07:26.301953   47753 out.go:374] Setting ErrFile to fd 2...
	I0919 23:07:26.301957   47753 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:07:26.302166   47753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	I0919 23:07:26.302390   47753 out.go:368] Setting JSON to false
	I0919 23:07:26.302411   47753 mustload.go:65] Loading cluster: multinode-337202
	I0919 23:07:26.302460   47753 notify.go:220] Checking for updates...
	I0919 23:07:26.302878   47753 config.go:182] Loaded profile config "multinode-337202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:07:26.302900   47753 status.go:174] checking status of multinode-337202 ...
	I0919 23:07:26.303365   47753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:07:26.303423   47753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:07:26.317532   47753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33701
	I0919 23:07:26.318005   47753 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:07:26.318482   47753 main.go:141] libmachine: Using API Version  1
	I0919 23:07:26.318520   47753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:07:26.318850   47753 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:07:26.319037   47753 main.go:141] libmachine: (multinode-337202) Calling .GetState
	I0919 23:07:26.321159   47753 status.go:371] multinode-337202 host status = "Running" (err=<nil>)
	I0919 23:07:26.321193   47753 host.go:66] Checking if "multinode-337202" exists ...
	I0919 23:07:26.321642   47753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:07:26.321700   47753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:07:26.335932   47753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44105
	I0919 23:07:26.336461   47753 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:07:26.336931   47753 main.go:141] libmachine: Using API Version  1
	I0919 23:07:26.336952   47753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:07:26.337316   47753 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:07:26.337604   47753 main.go:141] libmachine: (multinode-337202) Calling .GetIP
	I0919 23:07:26.341231   47753 main.go:141] libmachine: (multinode-337202) DBG | domain multinode-337202 has defined MAC address 52:54:00:2e:48:c4 in network mk-multinode-337202
	I0919 23:07:26.341708   47753 main.go:141] libmachine: (multinode-337202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:48:c4", ip: ""} in network mk-multinode-337202: {Iface:virbr1 ExpiryTime:2025-09-20 00:04:27 +0000 UTC Type:0 Mac:52:54:00:2e:48:c4 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:multinode-337202 Clientid:01:52:54:00:2e:48:c4}
	I0919 23:07:26.341751   47753 main.go:141] libmachine: (multinode-337202) DBG | domain multinode-337202 has defined IP address 192.168.39.98 and MAC address 52:54:00:2e:48:c4 in network mk-multinode-337202
	I0919 23:07:26.341981   47753 host.go:66] Checking if "multinode-337202" exists ...
	I0919 23:07:26.342279   47753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:07:26.342324   47753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:07:26.356335   47753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
	I0919 23:07:26.356924   47753 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:07:26.357456   47753 main.go:141] libmachine: Using API Version  1
	I0919 23:07:26.357477   47753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:07:26.357841   47753 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:07:26.358039   47753 main.go:141] libmachine: (multinode-337202) Calling .DriverName
	I0919 23:07:26.358206   47753 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:07:26.358241   47753 main.go:141] libmachine: (multinode-337202) Calling .GetSSHHostname
	I0919 23:07:26.361923   47753 main.go:141] libmachine: (multinode-337202) DBG | domain multinode-337202 has defined MAC address 52:54:00:2e:48:c4 in network mk-multinode-337202
	I0919 23:07:26.362438   47753 main.go:141] libmachine: (multinode-337202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:48:c4", ip: ""} in network mk-multinode-337202: {Iface:virbr1 ExpiryTime:2025-09-20 00:04:27 +0000 UTC Type:0 Mac:52:54:00:2e:48:c4 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:multinode-337202 Clientid:01:52:54:00:2e:48:c4}
	I0919 23:07:26.362470   47753 main.go:141] libmachine: (multinode-337202) DBG | domain multinode-337202 has defined IP address 192.168.39.98 and MAC address 52:54:00:2e:48:c4 in network mk-multinode-337202
	I0919 23:07:26.362722   47753 main.go:141] libmachine: (multinode-337202) Calling .GetSSHPort
	I0919 23:07:26.362910   47753 main.go:141] libmachine: (multinode-337202) Calling .GetSSHKeyPath
	I0919 23:07:26.363069   47753 main.go:141] libmachine: (multinode-337202) Calling .GetSSHUsername
	I0919 23:07:26.363206   47753 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/multinode-337202/id_rsa Username:docker}
	I0919 23:07:26.451840   47753 ssh_runner.go:195] Run: systemctl --version
	I0919 23:07:26.459217   47753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:07:26.478875   47753 kubeconfig.go:125] found "multinode-337202" server: "https://192.168.39.98:8443"
	I0919 23:07:26.478910   47753 api_server.go:166] Checking apiserver status ...
	I0919 23:07:26.478962   47753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:07:26.502871   47753 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup
	W0919 23:07:26.516233   47753 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1378/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 23:07:26.516285   47753 ssh_runner.go:195] Run: ls
	I0919 23:07:26.522305   47753 api_server.go:253] Checking apiserver healthz at https://192.168.39.98:8443/healthz ...
	I0919 23:07:26.527162   47753 api_server.go:279] https://192.168.39.98:8443/healthz returned 200:
	ok
	I0919 23:07:26.527185   47753 status.go:463] multinode-337202 apiserver status = Running (err=<nil>)
	I0919 23:07:26.527195   47753 status.go:176] multinode-337202 status: &{Name:multinode-337202 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 23:07:26.527210   47753 status.go:174] checking status of multinode-337202-m02 ...
	I0919 23:07:26.527525   47753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:07:26.527566   47753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:07:26.541869   47753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39731
	I0919 23:07:26.542373   47753 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:07:26.542830   47753 main.go:141] libmachine: Using API Version  1
	I0919 23:07:26.542849   47753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:07:26.543192   47753 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:07:26.543447   47753 main.go:141] libmachine: (multinode-337202-m02) Calling .GetState
	I0919 23:07:26.545349   47753 status.go:371] multinode-337202-m02 host status = "Running" (err=<nil>)
	I0919 23:07:26.545366   47753 host.go:66] Checking if "multinode-337202-m02" exists ...
	I0919 23:07:26.545698   47753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:07:26.545764   47753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:07:26.560424   47753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44741
	I0919 23:07:26.561018   47753 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:07:26.561552   47753 main.go:141] libmachine: Using API Version  1
	I0919 23:07:26.561573   47753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:07:26.561927   47753 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:07:26.562128   47753 main.go:141] libmachine: (multinode-337202-m02) Calling .GetIP
	I0919 23:07:26.565320   47753 main.go:141] libmachine: (multinode-337202-m02) DBG | domain multinode-337202-m02 has defined MAC address 52:54:00:41:c6:37 in network mk-multinode-337202
	I0919 23:07:26.565871   47753 main.go:141] libmachine: (multinode-337202-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:c6:37", ip: ""} in network mk-multinode-337202: {Iface:virbr1 ExpiryTime:2025-09-20 00:05:57 +0000 UTC Type:0 Mac:52:54:00:41:c6:37 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-337202-m02 Clientid:01:52:54:00:41:c6:37}
	I0919 23:07:26.565894   47753 main.go:141] libmachine: (multinode-337202-m02) DBG | domain multinode-337202-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:41:c6:37 in network mk-multinode-337202
	I0919 23:07:26.566109   47753 host.go:66] Checking if "multinode-337202-m02" exists ...
	I0919 23:07:26.566444   47753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:07:26.566555   47753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:07:26.580969   47753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38535
	I0919 23:07:26.581410   47753 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:07:26.581893   47753 main.go:141] libmachine: Using API Version  1
	I0919 23:07:26.581952   47753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:07:26.582297   47753 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:07:26.582496   47753 main.go:141] libmachine: (multinode-337202-m02) Calling .DriverName
	I0919 23:07:26.582688   47753 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:07:26.582712   47753 main.go:141] libmachine: (multinode-337202-m02) Calling .GetSSHHostname
	I0919 23:07:26.585836   47753 main.go:141] libmachine: (multinode-337202-m02) DBG | domain multinode-337202-m02 has defined MAC address 52:54:00:41:c6:37 in network mk-multinode-337202
	I0919 23:07:26.586277   47753 main.go:141] libmachine: (multinode-337202-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:c6:37", ip: ""} in network mk-multinode-337202: {Iface:virbr1 ExpiryTime:2025-09-20 00:05:57 +0000 UTC Type:0 Mac:52:54:00:41:c6:37 Iaid: IPaddr:192.168.39.146 Prefix:24 Hostname:multinode-337202-m02 Clientid:01:52:54:00:41:c6:37}
	I0919 23:07:26.586302   47753 main.go:141] libmachine: (multinode-337202-m02) DBG | domain multinode-337202-m02 has defined IP address 192.168.39.146 and MAC address 52:54:00:41:c6:37 in network mk-multinode-337202
	I0919 23:07:26.586532   47753 main.go:141] libmachine: (multinode-337202-m02) Calling .GetSSHPort
	I0919 23:07:26.586695   47753 main.go:141] libmachine: (multinode-337202-m02) Calling .GetSSHKeyPath
	I0919 23:07:26.586864   47753 main.go:141] libmachine: (multinode-337202-m02) Calling .GetSSHUsername
	I0919 23:07:26.586975   47753 sshutil.go:53] new ssh client: &{IP:192.168.39.146 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21594-14764/.minikube/machines/multinode-337202-m02/id_rsa Username:docker}
	I0919 23:07:26.669045   47753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:07:26.686908   47753 status.go:176] multinode-337202-m02 status: &{Name:multinode-337202-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0919 23:07:26.686950   47753 status.go:174] checking status of multinode-337202-m03 ...
	I0919 23:07:26.687301   47753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:07:26.687348   47753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:07:26.701388   47753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42853
	I0919 23:07:26.701881   47753 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:07:26.702350   47753 main.go:141] libmachine: Using API Version  1
	I0919 23:07:26.702376   47753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:07:26.702753   47753 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:07:26.702943   47753 main.go:141] libmachine: (multinode-337202-m03) Calling .GetState
	I0919 23:07:26.704625   47753 status.go:371] multinode-337202-m03 host status = "Stopped" (err=<nil>)
	I0919 23:07:26.704644   47753 status.go:384] host is not running, skipping remaining checks
	I0919 23:07:26.704652   47753 status.go:176] multinode-337202-m03 status: &{Name:multinode-337202-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.74s)

TestMultiNode/serial/StartAfterStop (38.51s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-337202 node start m03 -v=5 --alsologtostderr: (37.839679635s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.51s)

TestMultiNode/serial/RestartKeepsNodes (318.4s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-337202
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-337202
E0919 23:09:58.388613   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-337202: (2m39.908832396s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-337202 --wait=true -v=5 --alsologtostderr
E0919 23:11:19.737969   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:11:36.655898   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:13:01.456337   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-337202 --wait=true -v=5 --alsologtostderr: (2m38.390981552s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-337202
--- PASS: TestMultiNode/serial/RestartKeepsNodes (318.40s)

TestMultiNode/serial/DeleteNode (2.8s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-337202 node delete m03: (2.24769077s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.80s)
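
The Ready check at multinode_test.go:444 above is plain kubectl and can be rerun by hand against any profile. A minimal sketch using the same go-template the test passes (it assumes the current kubectl context points at the cluster under test):

	# Print the Ready condition status of every node, one "True"/"False" per line.
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'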

TestMultiNode/serial/StopMultiNode (168.22s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 stop
E0919 23:14:58.388555   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-337202 stop: (2m48.054213112s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-337202 status: exit status 7 (82.194625ms)

-- stdout --
	multinode-337202
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-337202-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-337202 status --alsologtostderr: exit status 7 (81.147265ms)

-- stdout --
	multinode-337202
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-337202-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0919 23:16:14.595358   50625 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:16:14.595652   50625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:16:14.595662   50625 out.go:374] Setting ErrFile to fd 2...
	I0919 23:16:14.595666   50625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:16:14.595916   50625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	I0919 23:16:14.596135   50625 out.go:368] Setting JSON to false
	I0919 23:16:14.596156   50625 mustload.go:65] Loading cluster: multinode-337202
	I0919 23:16:14.596257   50625 notify.go:220] Checking for updates...
	I0919 23:16:14.596621   50625 config.go:182] Loaded profile config "multinode-337202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:16:14.596650   50625 status.go:174] checking status of multinode-337202 ...
	I0919 23:16:14.597194   50625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:16:14.597232   50625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:16:14.610703   50625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44279
	I0919 23:16:14.611280   50625 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:16:14.611886   50625 main.go:141] libmachine: Using API Version  1
	I0919 23:16:14.611917   50625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:16:14.612222   50625 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:16:14.612366   50625 main.go:141] libmachine: (multinode-337202) Calling .GetState
	I0919 23:16:14.614268   50625 status.go:371] multinode-337202 host status = "Stopped" (err=<nil>)
	I0919 23:16:14.614293   50625 status.go:384] host is not running, skipping remaining checks
	I0919 23:16:14.614302   50625 status.go:176] multinode-337202 status: &{Name:multinode-337202 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 23:16:14.614338   50625 status.go:174] checking status of multinode-337202-m02 ...
	I0919 23:16:14.614795   50625 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0919 23:16:14.614846   50625 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0919 23:16:14.628269   50625 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I0919 23:16:14.628690   50625 main.go:141] libmachine: () Calling .GetVersion
	I0919 23:16:14.629331   50625 main.go:141] libmachine: Using API Version  1
	I0919 23:16:14.629360   50625 main.go:141] libmachine: () Calling .SetConfigRaw
	I0919 23:16:14.629755   50625 main.go:141] libmachine: () Calling .GetMachineName
	I0919 23:16:14.630049   50625 main.go:141] libmachine: (multinode-337202-m02) Calling .GetState
	I0919 23:16:14.631795   50625 status.go:371] multinode-337202-m02 host status = "Stopped" (err=<nil>)
	I0919 23:16:14.631810   50625 status.go:384] host is not running, skipping remaining checks
	I0919 23:16:14.631815   50625 status.go:176] multinode-337202-m02 status: &{Name:multinode-337202-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (168.22s)
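
The non-zero exits above are the expected contract: minikube's documentation describes the status exit code as a bitmask of component state (host, cluster, and Kubernetes each contribute a bit), so a fully stopped cluster yields exit status 7 rather than 0. A minimal scripting sketch against that behavior, using this run's profile name:

	# 0 = everything running; 7 (as seen above) = host, cluster, and k8s all stopped.
	out/minikube-linux-amd64 -p multinode-337202 status
	rc=$?
	if [ "$rc" -eq 7 ]; then
		echo "cluster stopped"
	elif [ "$rc" -ne 0 ]; then
		echo "unexpected status exit code: $rc"
	fi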

TestMultiNode/serial/RestartMultiNode (89.45s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-337202 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0919 23:16:36.656454   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-337202 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m28.88998073s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-337202 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (89.45s)

TestMultiNode/serial/ValidateNameConflict (42.77s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-337202
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-337202-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-337202-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (63.607533ms)

-- stdout --
	* [multinode-337202-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-337202-m02' is duplicated with machine name 'multinode-337202-m02' in profile 'multinode-337202'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-337202-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-337202-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.584906197s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-337202
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-337202: exit status 80 (223.670498ms)

-- stdout --
	* Adding node m03 to cluster multinode-337202 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-337202-m03 already exists in multinode-337202-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-337202-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.77s)

TestScheduledStopUnix (112.82s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-901597 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0919 23:21:36.663944   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-901597 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.103355811s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-901597 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-901597 -n scheduled-stop-901597
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-901597 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0919 23:21:44.068460   18671 retry.go:31] will retry after 57.291µs: open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/scheduled-stop-901597/pid: no such file or directory
I0919 23:21:44.069615   18671 retry.go:31] will retry after 221.331µs: open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/scheduled-stop-901597/pid: no such file or directory
I0919 23:21:44.070791   18671 retry.go:31] will retry after 274.68µs: open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/scheduled-stop-901597/pid: no such file or directory
I0919 23:21:44.071944   18671 retry.go:31] will retry after 189.642µs: open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/scheduled-stop-901597/pid: no such file or directory
I0919 23:21:44.073101   18671 retry.go:31] will retry after 564.313µs: open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/scheduled-stop-901597/pid: no such file or directory
I0919 23:21:44.074239   18671 retry.go:31] will retry after 886.535µs: open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/scheduled-stop-901597/pid: no such file or directory
I0919 23:21:44.075395   18671 retry.go:31] will retry after 702.244µs: open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/scheduled-stop-901597/pid: no such file or directory
I0919 23:21:44.076531   18671 retry.go:31] will retry after 1.119522ms: open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/scheduled-stop-901597/pid: no such file or directory
I0919 23:21:44.078717   18671 retry.go:31] will retry after 2.220969ms: open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/scheduled-stop-901597/pid: no such file or directory
I0919 23:21:44.081912   18671 retry.go:31] will retry after 5.676331ms: open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/scheduled-stop-901597/pid: no such file or directory
I0919 23:21:44.088155   18671 retry.go:31] will retry after 5.218223ms: open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/scheduled-stop-901597/pid: no such file or directory
I0919 23:21:44.094431   18671 retry.go:31] will retry after 12.591571ms: open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/scheduled-stop-901597/pid: no such file or directory
I0919 23:21:44.107745   18671 retry.go:31] will retry after 18.446433ms: open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/scheduled-stop-901597/pid: no such file or directory
I0919 23:21:44.127028   18671 retry.go:31] will retry after 18.30732ms: open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/scheduled-stop-901597/pid: no such file or directory
I0919 23:21:44.146323   18671 retry.go:31] will retry after 42.373081ms: open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/scheduled-stop-901597/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-901597 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-901597 -n scheduled-stop-901597
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-901597
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-901597 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-901597
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-901597: exit status 7 (63.762055ms)

-- stdout --
	scheduled-stop-901597
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-901597 -n scheduled-stop-901597
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-901597 -n scheduled-stop-901597: exit status 7 (63.038336ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-901597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-901597
--- PASS: TestScheduledStopUnix (112.82s)
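
Everything this test drives is reachable from the CLI; a condensed sketch of the same schedule/cancel cycle, using the commands exactly as they appear in the log (the sleep length is illustrative, just long enough for the 15s schedule to fire):

	# Schedule a stop five minutes out, then cancel it before it fires.
	out/minikube-linux-amd64 stop -p scheduled-stop-901597 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-901597 --cancel-scheduled

	# Schedule a short stop, wait past it, and confirm the host reaches Stopped.
	out/minikube-linux-amd64 stop -p scheduled-stop-901597 --schedule 15s
	sleep 20
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-901597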

TestRunningBinaryUpgrade (154.48s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3103431321 start -p running-upgrade-077036 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3103431321 start -p running-upgrade-077036 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m42.593940079s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-077036 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-077036 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (50.428969889s)
helpers_test.go:175: Cleaning up "running-upgrade-077036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-077036
--- PASS: TestRunningBinaryUpgrade (154.48s)

TestKubernetesUpgrade (192.2s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-630527 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-630527 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.974935731s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-630527
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-630527: (2.008868031s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-630527 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-630527 status --format={{.Host}}: exit status 7 (84.667339ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-630527 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-630527 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m9.261505801s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-630527 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-630527 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-630527 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (109.275077ms)

-- stdout --
	* [kubernetes-upgrade-630527] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-630527
	    minikube start -p kubernetes-upgrade-630527 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6305272 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-630527 --kubernetes-version=v1.34.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-630527 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0919 23:24:58.387803   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-630527 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m16.745424039s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-630527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-630527
--- PASS: TestKubernetesUpgrade (192.20s)
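
The passing flow above doubles as the supported upgrade recipe: start on the old version, stop, then start again with a newer --kubernetes-version; re-running start with an older version is refused with K8S_DOWNGRADE_UNSUPPORTED (exit 106). Condensed, with the versions from this run:

	# Upgrade path exercised by the test.
	minikube start -p kubernetes-upgrade-630527 --kubernetes-version=v1.28.0
	minikube stop -p kubernetes-upgrade-630527
	minikube start -p kubernetes-upgrade-630527 --kubernetes-version=v1.34.0

	# A downgrade attempt exits 106; delete/recreate is the suggested way out.
	minikube start -p kubernetes-upgrade-630527 --kubernetes-version=v1.28.0 || echo "downgrade refused: $?"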

TestStoppedBinaryUpgrade/Setup (0.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

TestStoppedBinaryUpgrade/Upgrade (134.41s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3261534518 start -p stopped-upgrade-214844 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3261534518 start -p stopped-upgrade-214844 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m24.43074854s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3261534518 -p stopped-upgrade-214844 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3261534518 -p stopped-upgrade-214844 stop: (4.178427248s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-214844 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-214844 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (45.795457319s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (134.41s)
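
The shape of this test is the useful part for local reproduction: the cluster is created and stopped by an old release binary, and the binary under test must then adopt and restart the existing profile. Sketched with this run's paths (the /tmp binary is the test's own temp copy of the v1.32.0 release):

	# Old release creates and stops the cluster...
	/tmp/minikube-v1.32.0.3261534518 start -p stopped-upgrade-214844 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.32.0.3261534518 -p stopped-upgrade-214844 stop
	# ...and the new binary restarts the same profile in place.
	out/minikube-linux-amd64 start -p stopped-upgrade-214844 --memory=3072 --driver=kvm2 --container-runtime=crio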

TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-214844
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-214844: (1.009712519s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

TestPause/serial/Start (101.64s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-661050 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-661050 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m41.640294018s)
--- PASS: TestPause/serial/Start (101.64s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-119206 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-119206 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (74.617919ms)

-- stdout --
	* [NoKubernetes-119206] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (67.46s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-119206 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-119206 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.115224304s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-119206 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (67.46s)

TestNoKubernetes/serial/StartWithStopK8s (30.2s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-119206 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-119206 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (29.068562271s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-119206 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-119206 status -o json: exit status 2 (256.207292ms)

-- stdout --
	{"Name":"NoKubernetes-119206","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-119206
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.20s)
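
The JSON form of status shown above is the easiest thing to assert on in scripts; a minimal sketch, assuming jq is available on the host:

	# A --no-kubernetes profile should report a running host with the
	# Kubernetes components stopped, matching the JSON above.
	out/minikube-linux-amd64 -p NoKubernetes-119206 status -o json \
		| jq -e '.Host == "Running" and .Kubelet == "Stopped"'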

TestPause/serial/SecondStartNoReconfiguration (40.49s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-661050 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-661050 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.454404775s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.49s)

TestNoKubernetes/serial/Start (26.18s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-119206 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-119206 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (26.181214092s)
--- PASS: TestNoKubernetes/serial/Start (26.18s)

TestNetworkPlugins/group/false (3.17s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-024908 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-024908 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (130.266476ms)

-- stdout --
	* [false-024908] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0919 23:27:25.754676   59649 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:27:25.755196   59649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:27:25.755269   59649 out.go:374] Setting ErrFile to fd 2...
	I0919 23:27:25.755278   59649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:27:25.755853   59649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14764/.minikube/bin
	I0919 23:27:25.756868   59649 out.go:368] Setting JSON to false
	I0919 23:27:25.758593   59649 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7773,"bootTime":1758316673,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:27:25.758783   59649 start.go:140] virtualization: kvm guest
	I0919 23:27:25.760665   59649 out.go:179] * [false-024908] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:27:25.762261   59649 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:27:25.762275   59649 notify.go:220] Checking for updates...
	I0919 23:27:25.763532   59649 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:27:25.765687   59649 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14764/kubeconfig
	I0919 23:27:25.767121   59649 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14764/.minikube
	I0919 23:27:25.768348   59649 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:27:25.769575   59649 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:27:25.771817   59649 config.go:182] Loaded profile config "NoKubernetes-119206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0919 23:27:25.771983   59649 config.go:182] Loaded profile config "cert-expiration-265541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:27:25.772166   59649 config.go:182] Loaded profile config "pause-661050": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:27:25.772361   59649 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:27:25.820170   59649 out.go:179] * Using the kvm2 driver based on user configuration
	I0919 23:27:25.821327   59649 start.go:304] selected driver: kvm2
	I0919 23:27:25.821344   59649 start.go:918] validating driver "kvm2" against <nil>
	I0919 23:27:25.821368   59649 start.go:929] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:27:25.826879   59649 out.go:203] 
	W0919 23:27:25.828184   59649 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0919 23:27:25.829287   59649 out.go:203] 

** /stderr **
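
The MK_USAGE failure above is the guard this test exists to check: with the crio runtime, minikube requires some CNI, so --cni=false is rejected during validation, before any VM is created. Configurations that do pass that check look like this (profile name hypothetical):

	# Any concrete CNI plugin satisfies crio's requirement...
	minikube start -p cni-demo --container-runtime=crio --cni=bridge
	# ...as does omitting --cni and letting minikube auto-select one.
	minikube start -p cni-demo --container-runtime=crio
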
net_test.go:88: 
----------------------- debugLogs start: false-024908 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-024908

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-024908

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-024908

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-024908

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-024908

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-024908

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-024908

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-024908

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-024908

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-024908

>>> host: /etc/nsswitch.conf:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: /etc/hosts:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: /etc/resolv.conf:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-024908

>>> host: crictl pods:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: crictl containers:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> k8s: describe netcat deployment:
error: context "false-024908" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-024908" does not exist

>>> k8s: netcat logs:
error: context "false-024908" does not exist

>>> k8s: describe coredns deployment:
error: context "false-024908" does not exist

>>> k8s: describe coredns pods:
error: context "false-024908" does not exist

>>> k8s: coredns logs:
error: context "false-024908" does not exist

>>> k8s: describe api server pod(s):
error: context "false-024908" does not exist

>>> k8s: api server logs:
error: context "false-024908" does not exist

>>> host: /etc/cni:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: ip a s:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: ip r s:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: iptables-save:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: iptables table nat:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> k8s: describe kube-proxy daemon set:
error: context "false-024908" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-024908" does not exist

>>> k8s: kube-proxy logs:
error: context "false-024908" does not exist

>>> host: kubelet daemon status:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: kubelet daemon config:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> k8s: kubelet logs:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:26:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.139:8443
  name: cert-expiration-265541
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:27:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.95:8443
  name: pause-661050
contexts:
- context:
    cluster: cert-expiration-265541
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:26:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-265541
  name: cert-expiration-265541
- context:
    cluster: pause-661050
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:27:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-661050
  name: pause-661050
current-context: pause-661050
kind: Config
users:
- name: cert-expiration-265541
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/cert-expiration-265541/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/cert-expiration-265541/client.key
- name: pause-661050
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/pause-661050/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/pause-661050/client.key
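
With two profiles present in this kubeconfig, the usual way to inspect and switch between them is kubectl's config subcommands; a brief sketch:

	# List both contexts; the asterisk marks the current one (pause-661050 here).
	kubectl config get-contexts
	# Switch to the other profile's context.
	kubectl config use-context cert-expiration-265541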

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-024908

>>> host: docker daemon status:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: docker daemon config:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: /etc/docker/daemon.json:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: docker system info:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: cri-docker daemon status:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: cri-docker daemon config:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: cri-dockerd version:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: containerd daemon status:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: containerd daemon config:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: /etc/containerd/config.toml:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: containerd config dump:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: crio daemon status:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: crio daemon config:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: /etc/crio:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

>>> host: crio config:
* Profile "false-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-024908"

----------------------- debugLogs end: false-024908 [took: 2.888758295s] --------------------------------
helpers_test.go:175: Cleaning up "false-024908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-024908
--- PASS: TestNetworkPlugins/group/false (3.17s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-119206 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-119206 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.339815ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
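
The non-zero exit above is the passing case: systemctl is-active exits 0 only when the unit is active, so the failed command shows the kubelet is not running on this --no-kubernetes profile. A minimal sketch of the same check (command and profile name taken from the test above):

	# Non-zero exit from is-active is the expected outcome here.
	if out/minikube-linux-amd64 ssh -p NoKubernetes-119206 "sudo systemctl is-active --quiet service kubelet"; then
	  echo "kubelet is active (unexpected for NoKubernetes)"
	else
	  echo "kubelet is not active, as expected"
	fi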

TestNoKubernetes/serial/ProfileList (2.52s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.714301592s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.52s)
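
A sketch for consuming the JSON form of the profile list programmatically (jq is an assumption, not part of the test; minikube groups profiles into "valid" and "invalid" arrays in this output):

	out/minikube-linux-amd64 profile list --output=json | jq -r '.valid[].Name'   # one profile name per line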

TestPause/serial/Pause (0.97s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-661050 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.97s)

TestPause/serial/VerifyStatus (0.28s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-661050 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-661050 --output=json --layout=cluster: exit status 2 (279.967781ms)

-- stdout --
	{"Name":"pause-661050","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-661050","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
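
In the cluster layout above, StatusCode 418 marks a paused component and 405 a stopped one, which is why a paused profile reports apiserver=Paused and kubelet=Stopped at the same time. A sketch for pulling the per-component states out of that JSON (jq assumed; field names taken from the output above):

	out/minikube-linux-amd64 status -p pause-661050 --output=json --layout=cluster \
	  | jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"'
	# apiserver: Paused
	# kubelet: Stopped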

TestPause/serial/Unpause (0.83s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-661050 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.83s)

TestNoKubernetes/serial/Stop (1.52s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-119206
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-119206: (1.52100364s)
--- PASS: TestNoKubernetes/serial/Stop (1.52s)

TestPause/serial/PauseAgain (1.07s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-661050 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-661050 --alsologtostderr -v=5: (1.068990815s)
--- PASS: TestPause/serial/PauseAgain (1.07s)

TestPause/serial/DeletePaused (0.9s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-661050 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.90s)

TestNoKubernetes/serial/StartNoArgs (35.43s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-119206 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-119206 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (35.42689141s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (35.43s)

TestPause/serial/VerifyDeletedResources (0.37s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.37s)

TestStartStop/group/old-k8s-version/serial/FirstStart (127.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-551579 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
E0919 23:27:59.739763   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-551579 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (2m7.860847339s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (127.86s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-119206 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-119206 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.758816ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestStartStop/group/no-preload/serial/FirstStart (123.15s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-065517 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-065517 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (2m3.153626157s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (123.15s)

TestStartStop/group/embed-certs/serial/FirstStart (125.04s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-669238 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
E0919 23:29:41.458782   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-669238 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (2m5.044017663s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (125.04s)
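
Unlike the file-based profiles in the kubeconfig dumped earlier, --embed-certs inlines the client credentials into the kubeconfig as client-certificate-data/client-key-data. A sketch for spot-checking that (standard kubectl jsonpath filter; head only truncates the base64 blob):

	kubectl config view --raw \
	  -o jsonpath='{.users[?(@.name=="embed-certs-669238")].user.client-certificate-data}' | head -c 40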

TestStartStop/group/old-k8s-version/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-551579 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [503dbb3d-3530-4fe9-90d7-7c3ecb7dcb80] Pending
helpers_test.go:352: "busybox" [503dbb3d-3530-4fe9-90d7-7c3ecb7dcb80] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [503dbb3d-3530-4fe9-90d7-7c3ecb7dcb80] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004098134s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-551579 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.39s)
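
The deploy check is a plain create/wait/exec cycle; a minimal sketch of the same flow (manifest path is the repo's testdata, and the timeout mirrors the test's 8m budget):

	kubectl --context old-k8s-version-551579 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-551579 wait --for=condition=Ready pod/busybox --timeout=8m0s
	kubectl --context old-k8s-version-551579 exec busybox -- /bin/sh -c "ulimit -n"   # prints the container's open-file limit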

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-551579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-551579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.162678509s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-551579 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.25s)
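
The --images/--registries flags rewrite the addon's image reference, which is what the describe call above inspects. A sketch for reading the rewritten image directly (standard kubectl jsonpath; the expected value is inferred from the two overrides above):

	kubectl --context old-k8s-version-551579 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected: fake.domain/registry.k8s.io/echoserver:1.4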

TestStartStop/group/old-k8s-version/serial/Stop (89.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-551579 --alsologtostderr -v=3
E0919 23:29:58.387695   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-551579 --alsologtostderr -v=3: (1m29.601301083s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (89.60s)

TestStartStop/group/no-preload/serial/DeployApp (8.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-065517 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d03b504b-b17c-4d3a-9317-a6b2d7d697c9] Pending
helpers_test.go:352: "busybox" [d03b504b-b17c-4d3a-9317-a6b2d7d697c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d03b504b-b17c-4d3a-9317-a6b2d7d697c9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004297362s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-065517 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.31s)

TestStartStop/group/newest-cni/serial/FirstStart (45.84s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-155149 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-155149 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (45.843611754s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.84s)

TestStartStop/group/embed-certs/serial/DeployApp (8.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-669238 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [eaa0acee-05e4-4de4-8f38-e18d925cfbb2] Pending
helpers_test.go:352: "busybox" [eaa0acee-05e4-4de4-8f38-e18d925cfbb2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [eaa0acee-05e4-4de4-8f38-e18d925cfbb2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.006751723s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-669238 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.30s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.82s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-065517 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-065517 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.746037205s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-065517 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.82s)

TestStartStop/group/no-preload/serial/Stop (83.93s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-065517 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-065517 --alsologtostderr -v=3: (1m23.929223544s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (83.93s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-669238 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-669238 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.021495132s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-669238 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/embed-certs/serial/Stop (82.62s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-669238 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-669238 --alsologtostderr -v=3: (1m22.615187084s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (82.62s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.35s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-155149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-155149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.345673042s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.35s)

TestStartStop/group/newest-cni/serial/Stop (11.04s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-155149 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-155149 --alsologtostderr -v=3: (11.038618863s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.04s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-155149 -n newest-cni-155149
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-155149 -n newest-cni-155149: exit status 7 (62.262663ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-155149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
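
minikube status signals state through its exit code as well as its output: the exit status 7 above accompanies the Stopped host state, and the test explicitly tolerates it ("may be ok"). A sketch capturing both signals (plain shell, no set -e):

	state=$(out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-155149 -n newest-cni-155149)
	code=$?
	echo "host=${state} exit=${code}"   # in the run above: host=Stopped exit=7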

TestStartStop/group/newest-cni/serial/SecondStart (37.39s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-155149 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-155149 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (36.967420972s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-155149 -n newest-cni-155149
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-551579 -n old-k8s-version-551579
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-551579 -n old-k8s-version-551579: exit status 7 (75.687839ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-551579 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (55.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-551579 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
E0919 23:31:36.656520   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-551579 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (55.417015725s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-551579 -n old-k8s-version-551579
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (55.80s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-065517 -n no-preload-065517
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-065517 -n no-preload-065517: exit status 7 (81.239252ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-065517 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (64.01s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-065517 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-065517 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m3.512691434s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-065517 -n no-preload-065517
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (64.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-155149 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
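
A sketch for listing the node's images from the JSON form used above (jq is an assumption; repoTags is the field minikube's JSON image listing uses for tagged names):

	out/minikube-linux-amd64 -p newest-cni-155149 image list --format=json | jq -r '.[].repoTags[]'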

TestStartStop/group/newest-cni/serial/Pause (4.43s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-155149 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-155149 --alsologtostderr -v=1: (1.262256834s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-155149 -n newest-cni-155149
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-155149 -n newest-cni-155149: exit status 2 (461.817609ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-155149 -n newest-cni-155149
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-155149 -n newest-cni-155149: exit status 2 (433.553911ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-155149 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-155149 --alsologtostderr -v=1: (1.407192836s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-155149 -n newest-cni-155149
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-155149 -n newest-cni-155149
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.43s)
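
The pause cycle above pins down the expected pair of states: while paused, the apiserver reports Paused and the kubelet Stopped, and both status probes exit 2 until unpause restores them. The same cycle, condensed (profile name from the test; exit codes as observed above):

	out/minikube-linux-amd64 pause -p newest-cni-155149
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-155149 -n newest-cni-155149   # Paused, exit 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-155149 -n newest-cni-155149     # Stopped, exit 2
	out/minikube-linux-amd64 unpause -p newest-cni-155149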

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-669238 -n embed-certs-669238
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-669238 -n embed-certs-669238: exit status 7 (97.24885ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-669238 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/embed-certs/serial/SecondStart (65.41s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-669238 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-669238 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m4.914289874s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-669238 -n embed-certs-669238
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (65.41s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (116.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-304197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-304197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m56.294000634s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (116.29s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-b8clv" [87b63ea8-a896-4f9a-bf6c-7f2d10f86395] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-b8clv" [87b63ea8-a896-4f9a-bf6c-7f2d10f86395] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.054102109s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.06s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-b8clv" [87b63ea8-a896-4f9a-bf6c-7f2d10f86395] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004829418s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-551579 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-551579 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/old-k8s-version/serial/Pause (3.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-551579 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-551579 --alsologtostderr -v=1: (1.223208356s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-551579 -n old-k8s-version-551579
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-551579 -n old-k8s-version-551579: exit status 2 (315.451623ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-551579 -n old-k8s-version-551579
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-551579 -n old-k8s-version-551579: exit status 2 (294.497964ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-551579 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-551579 --alsologtostderr -v=1: (1.001573158s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-551579 -n old-k8s-version-551579
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-551579 -n old-k8s-version-551579
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.66s)

TestNetworkPlugins/group/auto/Start (94.88s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-024908 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-024908 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m34.875236714s)
--- PASS: TestNetworkPlugins/group/auto/Start (94.88s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (17.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j8hfm" [18c9e3a6-4306-4a00-983c-d5685df23cc3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j8hfm" [18c9e3a6-4306-4a00-983c-d5685df23cc3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.004542567s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (17.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zwhbb" [6f64b943-8cae-4dd1-a722-45b831f02812] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.009417347s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zwhbb" [6f64b943-8cae-4dd1-a722-45b831f02812] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006360969s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-669238 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-669238 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (3.31s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-669238 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-669238 --alsologtostderr -v=1: (1.0153322s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-669238 -n embed-certs-669238
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-669238 -n embed-certs-669238: exit status 2 (268.268932ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-669238 -n embed-certs-669238
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-669238 -n embed-certs-669238: exit status 2 (291.40215ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-669238 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-669238 --alsologtostderr -v=1: (1.001916312s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-669238 -n embed-certs-669238
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-669238 -n embed-certs-669238
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.31s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j8hfm" [18c9e3a6-4306-4a00-983c-d5685df23cc3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004066549s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-065517 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/kindnet/Start (66.46s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-024908 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-024908 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m6.458536143s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (66.46s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-065517 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)
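VerifyKubernetesImages lists the images loaded in the node and flags anything outside the expected Kubernetes set, hence the "Found non-minikube image" line for busybox above. A sketch of parsing image list --format=json; the repoTags field assumed here is a guess at the JSON schema, so verify it against your minikube version before relying on it:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// image is a partial, assumed shape of one entry in
// `minikube image list --format=json`; only repoTags is used here.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "no-preload-065517", "image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			// Report anything not from registry.k8s.io, mirroring the
			// "Found non-minikube image" line above for the busybox image.
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}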

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (4.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-065517 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-065517 --alsologtostderr -v=1: (1.264705978s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-065517 -n no-preload-065517
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-065517 -n no-preload-065517: exit status 2 (378.957816ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-065517 -n no-preload-065517
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-065517 -n no-preload-065517: exit status 2 (349.133065ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-065517 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-065517 --alsologtostderr -v=1: (1.123070107s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-065517 -n no-preload-065517
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-065517 -n no-preload-065517
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.06s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (87.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-024908 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-024908 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m27.210948706s)
--- PASS: TestNetworkPlugins/group/calico/Start (87.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-304197 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [874b675f-ecbe-4052-a6fb-bc7a6028db03] Pending
helpers_test.go:352: "busybox" [874b675f-ecbe-4052-a6fb-bc7a6028db03] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [874b675f-ecbe-4052-a6fb-bc7a6028db03] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.007975657s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-304197 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)
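DeployApp finishes by running ulimit -n inside the busybox pod, a quick check that the runtime gave the container a sane open-file limit. A sketch of that probe, assuming kubectl on PATH and the context and pod names above:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl",
		"--context", "default-k8s-diff-port-304197",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		panic(err)
	}
	// ulimit -n prints the soft limit on open file descriptors.
	limit, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		panic(err)
	}
	fmt.Println("open-file soft limit in pod:", limit)
}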

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-304197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-304197 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.233227325s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-304197 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.33s)
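The --images and --registries overrides above point the metrics-server addon at a stand-in image on a fake registry, so the subsequent kubectl describe can verify the rendered Deployment without a real image pull. A sketch of enabling an addon with overrides and reading back the image; the jsonpath read-back is our addition, not what the test does:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "default-k8s-diff-port-304197"
	// Override both the image name and its registry for the addon.
	enable := exec.Command("out/minikube-linux-amd64", "addons", "enable",
		"metrics-server", "-p", profile,
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain")
	if err := enable.Run(); err != nil {
		panic(err)
	}
	// Read back the image the Deployment actually got.
	out, _ := exec.Command("kubectl", "--context", profile,
		"get", "deploy/metrics-server", "-n", "kube-system",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
	fmt.Println("metrics-server image:", string(out))
}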

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (89.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-304197 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-304197 --alsologtostderr -v=3: (1m29.478585834s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (89.48s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-fgg9q" [829046bf-dc96-4e4c-aebd-d90b41d9794c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005580716s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
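The ControllerPod checks (app=kindnet here, k8s-app=calico-node and app=flannel below) poll until a pod matching the label is Running. A minimal client-go sketch of that wait, assuming a kubeconfig in the default location; the real helper in helpers_test.go also tracks readiness transitions, as the pod-state lines above show:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(10 * time.Minute) // same budget as net_test.go:120
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=kindnet"})
		if err == nil && len(pods.Items) > 0 &&
			pods.Items[0].Status.Phase == corev1.PodRunning {
			fmt.Println("controller pod running:", pods.Items[0].Name)
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for app=kindnet pod")
}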

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-024908 "pgrep -a kubelet"
I0919 23:34:24.154973   18671 config.go:182] Loaded profile config "auto-024908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)
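KubeletFlags runs pgrep -a kubelet over minikube ssh; -a prints each matching PID with its full command line, exposing the flags the kubelet was actually started with. A sketch, assuming the same profile; the flag asserted on here is illustrative, not the test's exact check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// pgrep -a prints "PID full-command-line" for each match, exposing the
	// kubelet flags for assertions.
	out, err := exec.Command("out/minikube-linux-amd64",
		"ssh", "-p", "auto-024908", "pgrep -a kubelet").Output()
	if err != nil {
		panic(err)
	}
	line := strings.TrimSpace(string(out))
	fmt.Println("kubelet cmdline:", line)
	if !strings.Contains(line, "--container-runtime-endpoint") {
		fmt.Println("warning: expected flag not found")
	}
}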

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-024908 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4lwvm" [dce04d73-1d93-46de-a350-1443cf1aa491] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4lwvm" [dce04d73-1d93-46de-a350-1443cf1aa491] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.007154581s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)
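The NetCatPod steps deploy with kubectl replace --force, which deletes any existing object before recreating it so reruns do not fail on an already-existing Deployment, then wait for the app=netcat pod to move from Pending to Running. A sketch of that deploy-and-poll loop, reusing the manifest path and context from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctxName := "auto-024908"
	// --force deletes any existing object first, so repeated runs converge.
	replace := exec.Command("kubectl", "--context", ctxName,
		"replace", "--force", "-f", "testdata/netcat-deployment.yaml")
	if err := replace.Run(); err != nil {
		panic(err)
	}
	deadline := time.Now().Add(15 * time.Minute) // same budget as net_test.go:163
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", ctxName,
			"get", "pods", "-l", "app=netcat",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.Contains(string(out), "Running") {
			fmt.Println("netcat pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for app=netcat")
}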

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-024908 "pgrep -a kubelet"
I0919 23:34:29.840498   18671 config.go:182] Loaded profile config "kindnet-024908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-024908 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kzxvt" [01ad8da6-e79f-4070-ba08-84ceea171f26] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kzxvt" [01ad8da6-e79f-4070-ba08-84ceea171f26] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006592344s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-024908 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-024908 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-024908 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
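The DNS/Localhost/HairPin trio repeated for every plugin group probes, from inside the netcat pod: cluster DNS (nslookup kubernetes.default), loopback reachability (nc against localhost), and hairpin traffic, where the pod reaches itself through its own service name. In the nc invocations, -w 5 is the connection timeout, -i 5 a delay interval between probes, and -z means scan without sending data. A table-driven sketch of the trio, assuming kubectl and the auto-024908 context:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	probes := []struct{ name, cmd string }{
		// DNS: resolve the in-cluster API service name.
		{"dns", "nslookup kubernetes.default"},
		// Localhost: loopback reachability from inside the pod.
		{"localhost", "nc -w 5 -i 5 -z localhost 8080"},
		// HairPin: the pod dials its own service name, exercising hairpin NAT.
		{"hairpin", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for _, p := range probes {
		out, err := exec.Command("kubectl", "--context", "auto-024908",
			"exec", "deployment/netcat", "--", "/bin/sh", "-c", p.cmd).CombinedOutput()
		fmt.Printf("%s: err=%v out=%s\n", p.name, err, out)
	}
}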

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-024908 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-024908 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-024908 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (75.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-024908 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-024908 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m15.840222176s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (75.84s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-n4bhs" [a7124722-cb76-4fc0-91bc-4ad85612b074] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E0919 23:34:52.558656   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/old-k8s-version-551579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007886193s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (100.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-024908 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-024908 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m40.254130638s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (100.25s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-024908 "pgrep -a kubelet"
I0919 23:34:57.665322   18671 config.go:182] Loaded profile config "calico-024908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-024908 replace --force -f testdata/netcat-deployment.yaml
E0919 23:34:57.680803   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/old-k8s-version-551579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nmjgg" [cf4d3488-e2aa-4208-802a-bcef4380afba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 23:34:58.388330   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-nmjgg" [cf4d3488-e2aa-4208-802a-bcef4380afba] Running
E0919 23:35:07.922635   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/old-k8s-version-551579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004053081s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-024908 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-024908 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-024908 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (81.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-024908 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-024908 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m21.127763952s)
--- PASS: TestNetworkPlugins/group/flannel/Start (81.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-304197 -n default-k8s-diff-port-304197
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-304197 -n default-k8s-diff-port-304197: exit status 7 (81.526274ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-304197 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (69.83s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-304197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0
E0919 23:35:38.351005   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/no-preload-065517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:35:58.833302   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/no-preload-065517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-304197 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.0: (1m9.492147144s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-304197 -n default-k8s-diff-port-304197
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (69.83s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-024908 "pgrep -a kubelet"
I0919 23:36:07.221630   18671 config.go:182] Loaded profile config "custom-flannel-024908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-024908 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t7hhd" [9f50870f-6e9d-431e-b973-c533cd0a06ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 23:36:09.366404   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/old-k8s-version-551579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-t7hhd" [9f50870f-6e9d-431e-b973-c533cd0a06ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005082592s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-024908 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-024908 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-024908 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-024908 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-024908 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pg9xt" [e5ed3490-33b6-491a-b774-853523b75412] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pg9xt" [e5ed3490-33b6-491a-b774-853523b75412] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004391297s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (87.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-024908 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E0919 23:36:39.795002   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/no-preload-065517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-024908 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m27.755818871s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.76s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-024908 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-024908 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-266jx" [2a2ce4c1-817d-4178-90b5-044bd8e995da] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004145585s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-024908 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-024908 "pgrep -a kubelet"
I0919 23:36:56.499045   18671 config.go:182] Loaded profile config "flannel-024908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-024908 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-n548x" [359a86ff-7004-46ef-8f4a-3dfcf72f2d61] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-n548x" [359a86ff-7004-46ef-8f4a-3dfcf72f2d61] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.0054962s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-024908 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-024908 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-024908 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-024908 "pgrep -a kubelet"
I0919 23:38:06.441989   18671 config.go:182] Loaded profile config "bridge-024908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-024908 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-npvj4" [7cd27339-eef9-48bd-a868-a2f0502f3363] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-npvj4" [7cd27339-eef9-48bd-a868-a2f0502f3363] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005184353s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-024908 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-024908 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-024908 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
E0919 23:39:23.630037   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:23.636455   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:23.647974   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:23.669433   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:23.710939   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:23.792454   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:23.954029   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:24.275769   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:24.432400   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:24.438851   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:24.450283   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:24.471743   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:24.513209   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:24.594666   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:24.756830   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:24.917377   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:25.078915   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:25.721292   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:26.199179   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:27.003312   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:28.760463   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:29.564617   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:33.882270   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:34.686017   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:44.124103   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:44.927905   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:47.424274   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/old-k8s-version-551579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:51.432554   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:51.439038   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:51.450501   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:51.471986   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:51.513415   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:51.595003   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:51.756351   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:52.078103   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:52.720219   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:54.001623   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:56.563751   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:39:58.387946   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:40:01.685055   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:40:04.606160   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:40:05.410073   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:40:11.926996   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:40:15.130871   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/old-k8s-version-551579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:40:17.853170   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/no-preload-065517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:40:32.408363   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:40:45.558609   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/no-preload-065517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:40:45.568071   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:40:46.372149   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:07.480425   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:07.486802   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:07.498209   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:07.519654   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:07.561109   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:07.643132   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:07.804674   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:08.126834   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:08.768297   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:10.050358   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:12.611659   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:13.369857   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:17.733115   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:27.975104   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:36.656686   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:38.006602   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:38.013056   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:38.024510   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:38.045916   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:38.087336   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:38.168806   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:38.330352   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:38.652426   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:39.294481   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:40.576702   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:43.138255   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:48.260281   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:48.457055   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:50.250979   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:50.257349   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:50.268779   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:50.290241   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:50.331696   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:50.413216   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:50.574847   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:50.896712   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:51.538858   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:52.820566   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:55.382050   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:41:58.501622   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:42:00.503518   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:42:07.490088   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:42:08.294286   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:42:10.745440   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:42:18.983549   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:42:29.419240   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:42:31.227383   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:42:35.291783   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:42:59.945616   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:43:06.672619   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:43:06.679054   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:43:06.690441   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:43:06.711877   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:43:06.753325   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:43:06.834796   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:43:06.996379   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:43:07.318098   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:43:07.959758   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:43:09.242010   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:43:11.803363   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:43:12.188909   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:43:16.925749   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:43:27.167462   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:43:47.648867   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:43:51.340830   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/custom-flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:44:21.867041   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/enable-default-cni-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:44:23.630182   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:44:24.432397   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:44:28.610715   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/bridge-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:44:34.111164   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/flannel-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:44:39.742071   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/addons-266998/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:44:47.423951   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/old-k8s-version-551579/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:44:51.332142   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/kindnet-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:44:51.432718   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:44:52.136083   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/auto-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:44:58.388268   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/functional-351278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:45:17.852854   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/no-preload-065517/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:45:19.133396   18671 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/calico-024908/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
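Note: the cert_rotation lines above all share one failure mode — the long-running test binary's client-cert reload watcher (client-go's cert_rotation.go, surfacing via the tls-transport-cache logger) still holds kubeconfig entries for profiles that were deleted earlier in the run, so every reload attempt finds the profile's client.crt gone. A minimal shell sketch for confirming and pruning one stale entry by hand, assuming the standard .minikube layout shown in the paths above and that the context name mirrors the profile name:

$ PROFILE=custom-flannel-024908
$ ls ~/.minikube/profiles/$PROFILE/client.crt    # expected to fail: the profile was deleted
$ kubectl config get-contexts                    # a stale context may still reference it
$ kubectl config delete-context $PROFILE         # stop clients reloading the missing cert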

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-304197 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
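Note: this check lists the images loaded in the node and only logs known extras such as kindnetd and the busybox test image, so the test still passes. To inspect the same data by hand, a sketch along these lines (profile name taken from the log; jq is only for pretty-printing and is an assumption of this example):

$ out/minikube-linux-amd64 -p default-k8s-diff-port-304197 image list --format=json | jq '.'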

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-304197 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-304197 -n default-k8s-diff-port-304197
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-304197 -n default-k8s-diff-port-304197: exit status 2 (263.116833ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-304197 -n default-k8s-diff-port-304197
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-304197 -n default-k8s-diff-port-304197: exit status 2 (259.584ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-304197 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-304197 -n default-k8s-diff-port-304197
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-304197 -n default-k8s-diff-port-304197
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.82s)
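Note: while a cluster is paused, status deliberately exits non-zero (exit status 2) with APIServer=Paused and Kubelet=Stopped, which is why the harness logs "may be ok". A hedged re-creation of the same sequence by hand (quoting the Go templates so the shell does not eat the braces):

$ out/minikube-linux-amd64 pause -p default-k8s-diff-port-304197
$ out/minikube-linux-amd64 status --format='{{.APIServer}}' -p default-k8s-diff-port-304197   # prints Paused, exits 2
$ out/minikube-linux-amd64 unpause -p default-k8s-diff-port-304197
$ out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p default-k8s-diff-port-304197     # should print Running, exit 0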

Test skip (40/330)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.0/cached-images 0
15 TestDownloadOnly/v1.34.0/binaries 0
16 TestDownloadOnly/v1.34.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
118 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.02
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
262 TestStartStop/group/disable-driver-mounts 0.16
274 TestNetworkPlugins/group/kubenet 3.57
282 TestNetworkPlugins/group/cilium 3.52
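The skip reason for each entry is expanded below. Since the suite is ordinary go test, re-checking a single skip decision locally should look roughly like this from a minikube checkout (the -run pattern and timeout are illustrative; the harness expects a prebuilt out/minikube-linux-amd64):

$ go test ./test/integration -run 'TestDownloadOnly/v1.28.0/cached-images' -v -timeout 30m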
TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-266998 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
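Note: all eight tunnel subtests skip for the same reason — minikube tunnel needs to adjust host routes, and on this CI host sudo would prompt for a password when running route. A hedged sudoers sketch that would let the tunnel tests run non-interactively (binary paths vary by distro; verify them with command -v route ip first, and always edit via visudo):

# /etc/sudoers.d/minikube-tunnel (illustrative only)
jenkins ALL=(ALL) NOPASSWD: /usr/sbin/route, /usr/sbin/ip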

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-435865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-435865
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.57s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-024908 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-024908

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-024908

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-024908

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-024908

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-024908

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-024908

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-024908

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-024908

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-024908

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-024908

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: /etc/hosts:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: /etc/resolv.conf:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-024908

>>> host: crictl pods:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: crictl containers:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> k8s: describe netcat deployment:
error: context "kubenet-024908" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-024908" does not exist

>>> k8s: netcat logs:
error: context "kubenet-024908" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-024908" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-024908" does not exist

>>> k8s: coredns logs:
error: context "kubenet-024908" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-024908" does not exist

>>> k8s: api server logs:
error: context "kubenet-024908" does not exist

>>> host: /etc/cni:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: ip a s:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: ip r s:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: iptables-save:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: iptables table nat:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-024908" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-024908" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-024908" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: kubelet daemon config:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> k8s: kubelet logs:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:26:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.139:8443
  name: cert-expiration-265541
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:27:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.95:8443
  name: pause-661050
contexts:
- context:
    cluster: cert-expiration-265541
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:26:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-265541
  name: cert-expiration-265541
- context:
    cluster: pause-661050
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:27:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-661050
  name: pause-661050
current-context: pause-661050
kind: Config
users:
- name: cert-expiration-265541
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/cert-expiration-265541/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/cert-expiration-265541/client.key
- name: pause-661050
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/pause-661050/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/pause-661050/client.key
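Note what the dumped kubeconfig actually contains: only cert-expiration-265541 and pause-661050. There is no kubenet-024908 entry because debugLogs runs before that profile is ever started, which is exactly why every kubectl probe above reports "context was not found". Verifying by hand is standard kubectl config plumbing:

$ kubectl config get-contexts                    # lists only the two contexts above
$ kubectl --context kubenet-024908 get pods      # fails: context does not exist
$ kubectl config use-context pause-661050        # the current-context shown above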

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-024908

>>> host: docker daemon status:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: docker daemon config:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: docker system info:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: cri-docker daemon status:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: cri-docker daemon config:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: cri-dockerd version:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: containerd daemon status:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: containerd daemon config:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: containerd config dump:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: crio daemon status:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: crio daemon config:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: /etc/crio:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

>>> host: crio config:
* Profile "kubenet-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-024908"

----------------------- debugLogs end: kubenet-024908 [took: 3.39296021s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-024908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-024908
--- SKIP: TestNetworkPlugins/group/kubenet (3.57s)
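
Note: every probe in the debugLogs dump above reports "Profile not found" because the kubenet group is skipped before any cluster is created, so there is no kubenet-024908 profile for the collector to inspect. A minimal sketch of how to reproduce the host-side probes by hand, assuming the same local binary used throughout this report (out/minikube-linux-amd64); the last command is illustrative of what a "host:" probe runs once a profile exists:

    # list known profiles; kubenet-024908 is absent until started
    out/minikube-linux-amd64 profile list
    # create the profile the probes expect (the skipped test would normally do this)
    out/minikube-linux-amd64 start -p kubenet-024908
    # with the profile up, host probes resolve, e.g. listing pod sandboxes via crictl
    out/minikube-linux-amd64 -p kubenet-024908 ssh "sudo crictl pods"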

TestNetworkPlugins/group/cilium (3.52s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-024908 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-024908

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-024908

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-024908

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-024908

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-024908

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-024908

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-024908

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-024908

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-024908

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-024908

>>> host: /etc/nsswitch.conf:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: /etc/hosts:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: /etc/resolv.conf:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-024908

>>> host: crictl pods:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: crictl containers:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> k8s: describe netcat deployment:
error: context "cilium-024908" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-024908" does not exist

>>> k8s: netcat logs:
error: context "cilium-024908" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-024908" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-024908" does not exist

>>> k8s: coredns logs:
error: context "cilium-024908" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-024908" does not exist

>>> k8s: api server logs:
error: context "cilium-024908" does not exist

>>> host: /etc/cni:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: ip a s:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: ip r s:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: iptables-save:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: iptables table nat:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-024908

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-024908

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-024908" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-024908" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-024908

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-024908

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-024908" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-024908" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-024908" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-024908" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-024908" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: kubelet daemon config:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> k8s: kubelet logs:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:26:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.139:8443
  name: cert-expiration-265541
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14764/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:27:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.95:8443
  name: pause-661050
contexts:
- context:
    cluster: cert-expiration-265541
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:26:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-265541
  name: cert-expiration-265541
- context:
    cluster: pause-661050
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:27:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-661050
  name: pause-661050
current-context: pause-661050
kind: Config
users:
- name: cert-expiration-265541
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/cert-expiration-265541/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/cert-expiration-265541/client.key
- name: pause-661050
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/pause-661050/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14764/.minikube/profiles/pause-661050/client.key
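
Note: this kubectl config dump is the one probe in the cilium dump that succeeds, and it explains the failures around it: the kubeconfig defines only the cert-expiration-265541 and pause-661050 contexts (profiles from tests running concurrently on this host), so every command pinned to the never-created cilium-024908 context fails with "context was not found" or "does not exist". A minimal sketch of the same check from a shell, using standard kubectl subcommands (the grep pattern is illustrative):

    # list the contexts that actually exist in the kubeconfig
    kubectl config get-contexts -o name
    # look for the context the cilium probes target; exits non-zero here
    kubectl config get-contexts -o name | grep -x cilium-024908
    # the probes pin the context per invocation, which is what fails, e.g.:
    kubectl --context cilium-024908 get pods -A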

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-024908

>>> host: docker daemon status:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: docker daemon config:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: docker system info:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: cri-docker daemon status:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: cri-docker daemon config:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: cri-dockerd version:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: containerd daemon status:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: containerd daemon config:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: containerd config dump:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: crio daemon status:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: crio daemon config:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: /etc/crio:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

>>> host: crio config:
* Profile "cilium-024908" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-024908"

----------------------- debugLogs end: cilium-024908 [took: 3.350010669s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-024908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-024908
--- SKIP: TestNetworkPlugins/group/cilium (3.52s)
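
Note: for anyone who wants to exercise the skipped cilium path by hand, a sketch under stated assumptions: the --cni=cilium, --driver=kvm2, and --container-runtime=crio flags come from minikube's documented options rather than from this run, and the kube-system/cilium daemon set name is the upstream default that the "describe cilium daemon set" probe above looks for:

    # start a throwaway profile with the cilium CNI on the KVM/crio stack
    out/minikube-linux-amd64 start -p cilium-024908 --driver=kvm2 --container-runtime=crio --cni=cilium
    # wait for the daemon set the debug probes expect
    kubectl --context cilium-024908 -n kube-system rollout status ds/cilium --timeout=120s
    # clean up the profile, mirroring helpers_test.go
    out/minikube-linux-amd64 delete -p cilium-024908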