Test Report: KVM_Linux_crio 22049

b350bc6d66813cad84bbff620e1b65ef38f64c38:2025-12-06:42657

Failed tests (4/431)

Order   Failed test                                      Duration (s)
46      TestAddons/parallel/Ingress                      155.82
108     TestFunctional/parallel/PersistentVolumeClaim    370.01
345     TestPreload                                      159.79
368     TestPause/serial/SecondStartNoReconfiguration    67.43
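The failing subset can be re-run locally with the standard Go test runner's -run filter. A minimal sketch, assuming a minikube source checkout with out/minikube-linux-amd64 already built; the harness-specific flags and any build tags required by the repo's Makefile are omitted here and will be needed in practice:

    # One invocation per failed test keeps the -run patterns unambiguous,
    # since go test splits the pattern on '/' per subtest level.
    go test ./test/integration -v -timeout 60m -run 'TestAddons/parallel/Ingress'
    go test ./test/integration -v -timeout 60m -run 'TestFunctional/parallel/PersistentVolumeClaim'
    go test ./test/integration -v -timeout 60m -run 'TestPreload'
    go test ./test/integration -v -timeout 60m -run 'TestPause/serial/SecondStartNoReconfiguration'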
TestAddons/parallel/Ingress (155.82s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-618522 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-618522 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-618522 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [1d05c5f3-11c3-43f8-871c-1feba1d97857] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [1d05c5f3-11c3-43f8-871c-1feba1d97857] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.01067809s
I1206 08:32:05.268420    9552 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-618522 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.961407295s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-618522 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.168
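The probe above timed out: "ssh: Process exited with status 28" is the remote command's exit status, and 28 is curl's operation-timed-out code, so the ingress endpoint on 127.0.0.1 inside the VM never answered within the test's window. A minimal sketch for reproducing the check by hand against the addons-618522 profile, assuming the cluster from this run is still up (the explicit --max-time and the log tail are additions, not part of the test itself):

    # Confirm the ingress-nginx controller and the test's nginx pod are Ready.
    kubectl --context addons-618522 -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o wide
    kubectl --context addons-618522 get pods -l run=nginx -o wide
    # Repeat the probe the test runs from inside the VM, with an explicit curl timeout.
    out/minikube-linux-amd64 -p addons-618522 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # Controller logs show whether the request reached nginx at all.
    kubectl --context addons-618522 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50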
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-618522 -n addons-618522
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-618522 logs -n 25: (1.272770468s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-807354                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-807354 │ jenkins │ v1.37.0 │ 06 Dec 25 08:29 UTC │ 06 Dec 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-499439 --alsologtostderr --binary-mirror http://127.0.0.1:45531 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-499439 │ jenkins │ v1.37.0 │ 06 Dec 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-499439                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-499439 │ jenkins │ v1.37.0 │ 06 Dec 25 08:29 UTC │ 06 Dec 25 08:29 UTC │
	│ addons  │ disable dashboard -p addons-618522                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-618522                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:29 UTC │                     │
	│ start   │ -p addons-618522 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:29 UTC │ 06 Dec 25 08:31 UTC │
	│ addons  │ addons-618522 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:31 UTC │ 06 Dec 25 08:31 UTC │
	│ addons  │ addons-618522 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:31 UTC │ 06 Dec 25 08:31 UTC │
	│ addons  │ enable headlamp -p addons-618522 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:31 UTC │ 06 Dec 25 08:31 UTC │
	│ addons  │ addons-618522 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:31 UTC │ 06 Dec 25 08:31 UTC │
	│ addons  │ addons-618522 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:31 UTC │ 06 Dec 25 08:31 UTC │
	│ addons  │ addons-618522 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:31 UTC │ 06 Dec 25 08:32 UTC │
	│ addons  │ addons-618522 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
	│ ip      │ addons-618522 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
	│ addons  │ addons-618522 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
	│ ssh     │ addons-618522 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-618522                                                                                                                                                                                                                                                                                                                                                                                         │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
	│ addons  │ addons-618522 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
	│ addons  │ addons-618522 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
	│ addons  │ addons-618522 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
	│ ssh     │ addons-618522 ssh cat /opt/local-path-provisioner/pvc-c8bb1d8f-4c87-4fdb-8a4a-d380c7c73589_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
	│ addons  │ addons-618522 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:32 UTC │ 06 Dec 25 08:32 UTC │
	│ addons  │ addons-618522 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:33 UTC │ 06 Dec 25 08:33 UTC │
	│ addons  │ addons-618522 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:33 UTC │ 06 Dec 25 08:33 UTC │
	│ ip      │ addons-618522 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-618522        │ jenkins │ v1.37.0 │ 06 Dec 25 08:34 UTC │ 06 Dec 25 08:34 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:29:05
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:29:05.698070   10525 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:29:05.698178   10525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:29:05.698182   10525 out.go:374] Setting ErrFile to fd 2...
	I1206 08:29:05.698187   10525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:29:05.698396   10525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 08:29:05.698928   10525 out.go:368] Setting JSON to false
	I1206 08:29:05.699711   10525 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":688,"bootTime":1765009058,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:29:05.699776   10525 start.go:143] virtualization: kvm guest
	I1206 08:29:05.701836   10525 out.go:179] * [addons-618522] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 08:29:05.703286   10525 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:29:05.703296   10525 notify.go:221] Checking for updates...
	I1206 08:29:05.705593   10525 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:29:05.706685   10525 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	I1206 08:29:05.707739   10525 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	I1206 08:29:05.708774   10525 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 08:29:05.709883   10525 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:29:05.711084   10525 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:29:05.741890   10525 out.go:179] * Using the kvm2 driver based on user configuration
	I1206 08:29:05.742910   10525 start.go:309] selected driver: kvm2
	I1206 08:29:05.742926   10525 start.go:927] validating driver "kvm2" against <nil>
	I1206 08:29:05.742943   10525 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:29:05.743959   10525 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 08:29:05.744281   10525 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 08:29:05.744326   10525 cni.go:84] Creating CNI manager for ""
	I1206 08:29:05.744379   10525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 08:29:05.744391   10525 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 08:29:05.744437   10525 start.go:353] cluster config:
	{Name:addons-618522 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-618522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1206 08:29:05.744578   10525 iso.go:125] acquiring lock: {Name:mk30cf35cfaf5c28a2b5f78c7b431de5eb8c8e82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 08:29:05.746546   10525 out.go:179] * Starting "addons-618522" primary control-plane node in "addons-618522" cluster
	I1206 08:29:05.747565   10525 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 08:29:05.747593   10525 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 08:29:05.747610   10525 cache.go:65] Caching tarball of preloaded images
	I1206 08:29:05.747697   10525 preload.go:238] Found /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 08:29:05.747708   10525 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 08:29:05.747989   10525 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/config.json ...
	I1206 08:29:05.748058   10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/config.json: {Name:mk7f9da94ca10d314b801d8105975097da70fef6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:29:05.748190   10525 start.go:360] acquireMachinesLock for addons-618522: {Name:mk3342af5720fb96b5115fa945410cab4f7bd1fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 08:29:05.748231   10525 start.go:364] duration metric: took 28.823µs to acquireMachinesLock for "addons-618522"
	I1206 08:29:05.748248   10525 start.go:93] Provisioning new machine with config: &{Name:addons-618522 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:addons-618522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 08:29:05.748294   10525 start.go:125] createHost starting for "" (driver="kvm2")
	I1206 08:29:05.749716   10525 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1206 08:29:05.749861   10525 start.go:159] libmachine.API.Create for "addons-618522" (driver="kvm2")
	I1206 08:29:05.749888   10525 client.go:173] LocalClient.Create starting
	I1206 08:29:05.749978   10525 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem
	I1206 08:29:05.781012   10525 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/cert.pem
	I1206 08:29:05.906650   10525 main.go:143] libmachine: creating domain...
	I1206 08:29:05.906675   10525 main.go:143] libmachine: creating network...
	I1206 08:29:05.908021   10525 main.go:143] libmachine: found existing default network
	I1206 08:29:05.908193   10525 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 08:29:05.908727   10525 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00157fc80}
	I1206 08:29:05.908828   10525 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-618522</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 08:29:05.914620   10525 main.go:143] libmachine: creating private network mk-addons-618522 192.168.39.0/24...
	I1206 08:29:05.983441   10525 main.go:143] libmachine: private network mk-addons-618522 192.168.39.0/24 created
	I1206 08:29:05.983758   10525 main.go:143] libmachine: <network>
	  <name>mk-addons-618522</name>
	  <uuid>b78eb98a-a065-4470-8e55-ee6c47b15f2f</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:7c:55:b3'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 08:29:05.983788   10525 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522 ...
	I1206 08:29:05.983815   10525 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22049-5603/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso
	I1206 08:29:05.983827   10525 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22049-5603/.minikube
	I1206 08:29:05.983908   10525 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22049-5603/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22049-5603/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso...
	I1206 08:29:06.269048   10525 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa...
	I1206 08:29:06.417744   10525 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/addons-618522.rawdisk...
	I1206 08:29:06.417791   10525 main.go:143] libmachine: Writing magic tar header
	I1206 08:29:06.417812   10525 main.go:143] libmachine: Writing SSH key tar header
	I1206 08:29:06.417883   10525 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522 ...
	I1206 08:29:06.417945   10525 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522
	I1206 08:29:06.417999   10525 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522 (perms=drwx------)
	I1206 08:29:06.418017   10525 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22049-5603/.minikube/machines
	I1206 08:29:06.418026   10525 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22049-5603/.minikube/machines (perms=drwxr-xr-x)
	I1206 08:29:06.418037   10525 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22049-5603/.minikube
	I1206 08:29:06.418048   10525 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22049-5603/.minikube (perms=drwxr-xr-x)
	I1206 08:29:06.418058   10525 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22049-5603
	I1206 08:29:06.418076   10525 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22049-5603 (perms=drwxrwxr-x)
	I1206 08:29:06.418086   10525 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1206 08:29:06.418096   10525 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1206 08:29:06.418106   10525 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1206 08:29:06.418115   10525 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1206 08:29:06.418124   10525 main.go:143] libmachine: checking permissions on dir: /home
	I1206 08:29:06.418133   10525 main.go:143] libmachine: skipping /home - not owner
	I1206 08:29:06.418137   10525 main.go:143] libmachine: defining domain...
	I1206 08:29:06.419460   10525 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-618522</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/addons-618522.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-618522'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1206 08:29:06.426746   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:3d:72:3b in network default
	I1206 08:29:06.427302   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:06.427320   10525 main.go:143] libmachine: starting domain...
	I1206 08:29:06.427327   10525 main.go:143] libmachine: ensuring networks are active...
	I1206 08:29:06.427982   10525 main.go:143] libmachine: Ensuring network default is active
	I1206 08:29:06.428312   10525 main.go:143] libmachine: Ensuring network mk-addons-618522 is active
	I1206 08:29:06.428896   10525 main.go:143] libmachine: getting domain XML...
	I1206 08:29:06.429878   10525 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-618522</name>
	  <uuid>57f399cc-dddf-4d4f-b1df-b1180b83c0f4</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/addons-618522.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:96:93:89'/>
	      <source network='mk-addons-618522'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:3d:72:3b'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1206 08:29:07.715375   10525 main.go:143] libmachine: waiting for domain to start...
	I1206 08:29:07.716792   10525 main.go:143] libmachine: domain is now running
	I1206 08:29:07.716811   10525 main.go:143] libmachine: waiting for IP...
	I1206 08:29:07.717542   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:07.717956   10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
	I1206 08:29:07.717998   10525 main.go:143] libmachine: trying to list again with source=arp
	I1206 08:29:07.718255   10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
	I1206 08:29:07.718297   10525 retry.go:31] will retry after 266.106603ms: waiting for domain to come up
	I1206 08:29:07.985832   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:07.986398   10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
	I1206 08:29:07.986415   10525 main.go:143] libmachine: trying to list again with source=arp
	I1206 08:29:07.986761   10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
	I1206 08:29:07.986801   10525 retry.go:31] will retry after 387.267266ms: waiting for domain to come up
	I1206 08:29:08.375586   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:08.376137   10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
	I1206 08:29:08.376158   10525 main.go:143] libmachine: trying to list again with source=arp
	I1206 08:29:08.376529   10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
	I1206 08:29:08.376580   10525 retry.go:31] will retry after 331.631857ms: waiting for domain to come up
	I1206 08:29:08.710026   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:08.710480   10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
	I1206 08:29:08.710494   10525 main.go:143] libmachine: trying to list again with source=arp
	I1206 08:29:08.710731   10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
	I1206 08:29:08.710763   10525 retry.go:31] will retry after 523.998005ms: waiting for domain to come up
	I1206 08:29:09.236544   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:09.237018   10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
	I1206 08:29:09.237031   10525 main.go:143] libmachine: trying to list again with source=arp
	I1206 08:29:09.237270   10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
	I1206 08:29:09.237299   10525 retry.go:31] will retry after 650.549091ms: waiting for domain to come up
	I1206 08:29:09.889019   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:09.889513   10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
	I1206 08:29:09.889526   10525 main.go:143] libmachine: trying to list again with source=arp
	I1206 08:29:09.889818   10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
	I1206 08:29:09.889851   10525 retry.go:31] will retry after 683.637032ms: waiting for domain to come up
	I1206 08:29:10.574615   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:10.575246   10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
	I1206 08:29:10.575261   10525 main.go:143] libmachine: trying to list again with source=arp
	I1206 08:29:10.575593   10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
	I1206 08:29:10.575627   10525 retry.go:31] will retry after 1.146917189s: waiting for domain to come up
	I1206 08:29:11.724481   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:11.724948   10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
	I1206 08:29:11.724969   10525 main.go:143] libmachine: trying to list again with source=arp
	I1206 08:29:11.725218   10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
	I1206 08:29:11.725254   10525 retry.go:31] will retry after 1.046923271s: waiting for domain to come up
	I1206 08:29:12.773594   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:12.774131   10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
	I1206 08:29:12.774148   10525 main.go:143] libmachine: trying to list again with source=arp
	I1206 08:29:12.774421   10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
	I1206 08:29:12.774458   10525 retry.go:31] will retry after 1.269020208s: waiting for domain to come up
	I1206 08:29:14.044811   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:14.045348   10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
	I1206 08:29:14.045364   10525 main.go:143] libmachine: trying to list again with source=arp
	I1206 08:29:14.045622   10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
	I1206 08:29:14.045656   10525 retry.go:31] will retry after 1.538945073s: waiting for domain to come up
	I1206 08:29:15.586482   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:15.587146   10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
	I1206 08:29:15.587161   10525 main.go:143] libmachine: trying to list again with source=arp
	I1206 08:29:15.587443   10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
	I1206 08:29:15.587502   10525 retry.go:31] will retry after 2.905373773s: waiting for domain to come up
	I1206 08:29:18.496453   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:18.497022   10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
	I1206 08:29:18.497037   10525 main.go:143] libmachine: trying to list again with source=arp
	I1206 08:29:18.497352   10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
	I1206 08:29:18.497382   10525 retry.go:31] will retry after 2.524389877s: waiting for domain to come up
	I1206 08:29:21.023815   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:21.024226   10525 main.go:143] libmachine: no network interface addresses found for domain addons-618522 (source=lease)
	I1206 08:29:21.024238   10525 main.go:143] libmachine: trying to list again with source=arp
	I1206 08:29:21.024516   10525 main.go:143] libmachine: unable to find current IP address of domain addons-618522 in network mk-addons-618522 (interfaces detected: [])
	I1206 08:29:21.024549   10525 retry.go:31] will retry after 3.429567982s: waiting for domain to come up
	I1206 08:29:24.458105   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:24.458643   10525 main.go:143] libmachine: domain addons-618522 has current primary IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:24.458659   10525 main.go:143] libmachine: found domain IP: 192.168.39.168
	I1206 08:29:24.458671   10525 main.go:143] libmachine: reserving static IP address...
	I1206 08:29:24.459027   10525 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-618522", mac: "52:54:00:96:93:89", ip: "192.168.39.168"} in network mk-addons-618522
	I1206 08:29:24.643084   10525 main.go:143] libmachine: reserved static IP address 192.168.39.168 for domain addons-618522
	I1206 08:29:24.643103   10525 main.go:143] libmachine: waiting for SSH...
	I1206 08:29:24.643109   10525 main.go:143] libmachine: Getting to WaitForSSH function...
	I1206 08:29:24.645843   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:24.646325   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:minikube Clientid:01:52:54:00:96:93:89}
	I1206 08:29:24.646350   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:24.646548   10525 main.go:143] libmachine: Using SSH client type: native
	I1206 08:29:24.646796   10525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I1206 08:29:24.646808   10525 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1206 08:29:24.752319   10525 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 08:29:24.752686   10525 main.go:143] libmachine: domain creation complete
	I1206 08:29:24.754083   10525 machine.go:94] provisionDockerMachine start ...
	I1206 08:29:24.756392   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:24.756799   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:24.756826   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:24.757006   10525 main.go:143] libmachine: Using SSH client type: native
	I1206 08:29:24.757244   10525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I1206 08:29:24.757258   10525 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 08:29:24.862389   10525 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1206 08:29:24.862416   10525 buildroot.go:166] provisioning hostname "addons-618522"
	I1206 08:29:24.865315   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:24.865731   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:24.865759   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:24.865968   10525 main.go:143] libmachine: Using SSH client type: native
	I1206 08:29:24.866252   10525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I1206 08:29:24.866270   10525 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-618522 && echo "addons-618522" | sudo tee /etc/hostname
	I1206 08:29:24.990206   10525 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-618522
	
	I1206 08:29:24.993228   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:24.993648   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:24.993676   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:24.993846   10525 main.go:143] libmachine: Using SSH client type: native
	I1206 08:29:24.994072   10525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I1206 08:29:24.994097   10525 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-618522' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-618522/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-618522' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 08:29:25.109349   10525 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 08:29:25.109375   10525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5603/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5603/.minikube}
	I1206 08:29:25.109396   10525 buildroot.go:174] setting up certificates
	I1206 08:29:25.109406   10525 provision.go:84] configureAuth start
	I1206 08:29:25.112095   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.112506   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:25.112527   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.114758   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.115096   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:25.115121   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.115325   10525 provision.go:143] copyHostCerts
	I1206 08:29:25.115395   10525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5603/.minikube/key.pem (1675 bytes)
	I1206 08:29:25.115566   10525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5603/.minikube/ca.pem (1082 bytes)
	I1206 08:29:25.115657   10525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5603/.minikube/cert.pem (1123 bytes)
	I1206 08:29:25.115727   10525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca-key.pem org=jenkins.addons-618522 san=[127.0.0.1 192.168.39.168 addons-618522 localhost minikube]
	I1206 08:29:25.171718   10525 provision.go:177] copyRemoteCerts
	I1206 08:29:25.171790   10525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 08:29:25.174140   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.174486   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:25.174512   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.174644   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:25.259935   10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 08:29:25.292568   10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 08:29:25.324357   10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 08:29:25.356329   10525 provision.go:87] duration metric: took 246.907063ms to configureAuth
	I1206 08:29:25.356390   10525 buildroot.go:189] setting minikube options for container-runtime
	I1206 08:29:25.356576   10525 config.go:182] Loaded profile config "addons-618522": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:29:25.359698   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.360097   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:25.360124   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.360343   10525 main.go:143] libmachine: Using SSH client type: native
	I1206 08:29:25.360552   10525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I1206 08:29:25.360567   10525 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 08:29:25.598339   10525 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 08:29:25.598372   10525 machine.go:97] duration metric: took 844.27102ms to provisionDockerMachine
	I1206 08:29:25.598386   10525 client.go:176] duration metric: took 19.848491145s to LocalClient.Create
	I1206 08:29:25.598407   10525 start.go:167] duration metric: took 19.848544009s to libmachine.API.Create "addons-618522"
	I1206 08:29:25.598419   10525 start.go:293] postStartSetup for "addons-618522" (driver="kvm2")
	I1206 08:29:25.598433   10525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 08:29:25.598536   10525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 08:29:25.601525   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.601870   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:25.601893   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.602008   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:25.686355   10525 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 08:29:25.691695   10525 info.go:137] Remote host: Buildroot 2025.02
	I1206 08:29:25.691717   10525 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5603/.minikube/addons for local assets ...
	I1206 08:29:25.691787   10525 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5603/.minikube/files for local assets ...
	I1206 08:29:25.691810   10525 start.go:296] duration metric: took 93.384984ms for postStartSetup
	I1206 08:29:25.694779   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.695171   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:25.695194   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.695451   10525 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/config.json ...
	I1206 08:29:25.695673   10525 start.go:128] duration metric: took 19.947368476s to createHost
	I1206 08:29:25.697762   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.698238   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:25.698262   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.698507   10525 main.go:143] libmachine: Using SSH client type: native
	I1206 08:29:25.698700   10525 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I1206 08:29:25.698711   10525 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1206 08:29:25.804432   10525 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765009765.765707364
	
	I1206 08:29:25.804454   10525 fix.go:216] guest clock: 1765009765.765707364
	I1206 08:29:25.804463   10525 fix.go:229] Guest: 2025-12-06 08:29:25.765707364 +0000 UTC Remote: 2025-12-06 08:29:25.695686605 +0000 UTC m=+20.045537162 (delta=70.020759ms)
	I1206 08:29:25.804509   10525 fix.go:200] guest clock delta is within tolerance: 70.020759ms
	I1206 08:29:25.804516   10525 start.go:83] releasing machines lock for "addons-618522", held for 20.056273909s
	I1206 08:29:25.807260   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.807668   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:25.807692   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.808209   10525 ssh_runner.go:195] Run: cat /version.json
	I1206 08:29:25.808309   10525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 08:29:25.811241   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.811455   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.811672   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:25.811699   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.811849   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:25.811851   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:25.811878   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:25.812071   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:25.917122   10525 ssh_runner.go:195] Run: systemctl --version
	I1206 08:29:25.923910   10525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 08:29:26.085074   10525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 08:29:26.092288   10525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 08:29:26.092354   10525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 08:29:26.113649   10525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 08:29:26.113673   10525 start.go:496] detecting cgroup driver to use...
	I1206 08:29:26.113730   10525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 08:29:26.133784   10525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 08:29:26.151929   10525 docker.go:218] disabling cri-docker service (if available) ...
	I1206 08:29:26.151994   10525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 08:29:26.170197   10525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 08:29:26.187579   10525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 08:29:26.329201   10525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 08:29:26.535429   10525 docker.go:234] disabling docker service ...
	I1206 08:29:26.535526   10525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 08:29:26.552653   10525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 08:29:26.568392   10525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 08:29:26.726802   10525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 08:29:26.871713   10525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 08:29:26.889256   10525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 08:29:26.913635   10525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 08:29:26.913710   10525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:29:26.926424   10525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 08:29:26.926495   10525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:29:26.940063   10525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:29:26.952623   10525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:29:26.965438   10525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 08:29:26.979310   10525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:29:26.991973   10525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:29:27.014089   10525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 08:29:27.027675   10525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 08:29:27.038749   10525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 08:29:27.038822   10525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 08:29:27.063671   10525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 08:29:27.079524   10525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:29:27.223133   10525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 08:29:27.335179   10525 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 08:29:27.335298   10525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 08:29:27.341359   10525 start.go:564] Will wait 60s for crictl version
	I1206 08:29:27.341445   10525 ssh_runner.go:195] Run: which crictl
	I1206 08:29:27.345788   10525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 08:29:27.383352   10525 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1206 08:29:27.383504   10525 ssh_runner.go:195] Run: crio --version
	I1206 08:29:27.413774   10525 ssh_runner.go:195] Run: crio --version
	I1206 08:29:27.446797   10525 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1206 08:29:27.450690   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:27.451086   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:27.451114   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:27.451304   10525 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 08:29:27.456537   10525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 08:29:27.472945   10525 kubeadm.go:884] updating cluster {Name:addons-618522 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-618522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 08:29:27.473098   10525 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 08:29:27.473164   10525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:29:27.505062   10525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1206 08:29:27.505133   10525 ssh_runner.go:195] Run: which lz4
	I1206 08:29:27.509780   10525 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 08:29:27.514613   10525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 08:29:27.514652   10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1206 08:29:28.842414   10525 crio.go:462] duration metric: took 1.332662154s to copy over tarball
	I1206 08:29:28.842504   10525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 08:29:30.428560   10525 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.586023264s)
	I1206 08:29:30.428592   10525 crio.go:469] duration metric: took 1.586151495s to extract the tarball
	I1206 08:29:30.428601   10525 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 08:29:30.465345   10525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 08:29:30.510374   10525 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 08:29:30.510400   10525 cache_images.go:86] Images are preloaded, skipping loading
	I1206 08:29:30.510410   10525 kubeadm.go:935] updating node { 192.168.39.168 8443 v1.34.2 crio true true} ...
	I1206 08:29:30.510524   10525 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-618522 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-618522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 08:29:30.510608   10525 ssh_runner.go:195] Run: crio config
	I1206 08:29:30.559183   10525 cni.go:84] Creating CNI manager for ""
	I1206 08:29:30.559207   10525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 08:29:30.559223   10525 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 08:29:30.559258   10525 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.168 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-618522 NodeName:addons-618522 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 08:29:30.559387   10525 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-618522"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.168"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.168"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 08:29:30.559483   10525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 08:29:30.572367   10525 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 08:29:30.572438   10525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 08:29:30.585276   10525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1206 08:29:30.607056   10525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 08:29:30.628970   10525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1206 08:29:30.650950   10525 ssh_runner.go:195] Run: grep 192.168.39.168	control-plane.minikube.internal$ /etc/hosts
	I1206 08:29:30.655582   10525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.168	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 08:29:30.671341   10525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:29:30.817744   10525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:29:30.853664   10525 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522 for IP: 192.168.39.168
	I1206 08:29:30.853696   10525 certs.go:195] generating shared ca certs ...
	I1206 08:29:30.853720   10525 certs.go:227] acquiring lock for ca certs: {Name:mk000359972764fead2b3aaf8b843862aa35270c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:29:30.853911   10525 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5603/.minikube/ca.key
	I1206 08:29:30.959183   10525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt ...
	I1206 08:29:30.959212   10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt: {Name:mk98d18dd8a6f9e698099692788ea182be89556f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:29:30.959385   10525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5603/.minikube/ca.key ...
	I1206 08:29:30.959398   10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/ca.key: {Name:mk617b4143abd6eb5b699e411431f4c3518e2a8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:29:30.959494   10525 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.key
	I1206 08:29:31.097502   10525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.crt ...
	I1206 08:29:31.097532   10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.crt: {Name:mkfc7ab92bbdf62beb6034d33cd4580952764663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:29:31.097713   10525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.key ...
	I1206 08:29:31.097726   10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.key: {Name:mk4e124a8a4cadebc0035c7ad9b075cdab45993b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:29:31.097807   10525 certs.go:257] generating profile certs ...
	I1206 08:29:31.097866   10525 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.key
	I1206 08:29:31.097880   10525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt with IP's: []
	I1206 08:29:31.203120   10525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt ...
	I1206 08:29:31.203150   10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: {Name:mk3762266801ac43724b8f8cd842b85d6671b320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:29:31.203310   10525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.key ...
	I1206 08:29:31.203321   10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.key: {Name:mk1236cd5229554c01f652817c35695f89a44b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:29:31.203389   10525 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.key.32668d86
	I1206 08:29:31.203409   10525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.crt.32668d86 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.168]
	I1206 08:29:31.327479   10525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.crt.32668d86 ...
	I1206 08:29:31.327511   10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.crt.32668d86: {Name:mkd5cf6dfcde218ea513037b7edcd6f8c7a9464c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:29:31.327669   10525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.key.32668d86 ...
	I1206 08:29:31.327682   10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.key.32668d86: {Name:mkc0d2d20a6672d311a9a0fedef702fc2d832d50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:29:31.327753   10525 certs.go:382] copying /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.crt.32668d86 -> /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.crt
	I1206 08:29:31.327822   10525 certs.go:386] copying /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.key.32668d86 -> /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.key
	I1206 08:29:31.327869   10525 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/proxy-client.key
	I1206 08:29:31.327887   10525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/proxy-client.crt with IP's: []
	I1206 08:29:31.419224   10525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/proxy-client.crt ...
	I1206 08:29:31.419250   10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/proxy-client.crt: {Name:mk0c484331172407ea8b520fc091cc8bce5130fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:29:31.419411   10525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/proxy-client.key ...
	I1206 08:29:31.419422   10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/proxy-client.key: {Name:mkca5c2903e4dbfd94c7024ef1aca11c61796e19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:29:31.419617   10525 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 08:29:31.419655   10525 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem (1082 bytes)
	I1206 08:29:31.419679   10525 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/cert.pem (1123 bytes)
	I1206 08:29:31.419702   10525 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/key.pem (1675 bytes)
	I1206 08:29:31.420227   10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 08:29:31.452258   10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 08:29:31.482562   10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 08:29:31.517113   10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 08:29:31.553291   10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 08:29:31.586644   10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 08:29:31.617068   10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 08:29:31.647513   10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 08:29:31.677388   10525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 08:29:31.707416   10525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 08:29:31.728515   10525 ssh_runner.go:195] Run: openssl version
	I1206 08:29:31.735163   10525 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:29:31.747846   10525 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 08:29:31.760277   10525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:29:31.766508   10525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:29:31.766576   10525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 08:29:31.774394   10525 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 08:29:31.786915   10525 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 08:29:31.798853   10525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 08:29:31.803735   10525 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 08:29:31.803794   10525 kubeadm.go:401] StartCluster: {Name:addons-618522 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-618522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:29:31.803857   10525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 08:29:31.803898   10525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 08:29:31.840137   10525 cri.go:89] found id: ""
	I1206 08:29:31.840235   10525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 08:29:31.853119   10525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 08:29:31.865576   10525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 08:29:31.877569   10525 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 08:29:31.877587   10525 kubeadm.go:158] found existing configuration files:
	
	I1206 08:29:31.877633   10525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 08:29:31.888757   10525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 08:29:31.888815   10525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 08:29:31.900619   10525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 08:29:31.911651   10525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 08:29:31.911711   10525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 08:29:31.923419   10525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 08:29:31.935211   10525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 08:29:31.935265   10525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 08:29:31.948207   10525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 08:29:31.960235   10525 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 08:29:31.960286   10525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 08:29:31.973301   10525 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 08:29:32.122282   10525 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 08:29:43.999988   10525 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 08:29:44.000093   10525 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 08:29:44.000175   10525 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 08:29:44.000345   10525 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 08:29:44.000521   10525 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 08:29:44.000616   10525 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 08:29:44.003486   10525 out.go:252]   - Generating certificates and keys ...
	I1206 08:29:44.003585   10525 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 08:29:44.003669   10525 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 08:29:44.003804   10525 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 08:29:44.003899   10525 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 08:29:44.003982   10525 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 08:29:44.004054   10525 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 08:29:44.004138   10525 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 08:29:44.004286   10525 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-618522 localhost] and IPs [192.168.39.168 127.0.0.1 ::1]
	I1206 08:29:44.004358   10525 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 08:29:44.004503   10525 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-618522 localhost] and IPs [192.168.39.168 127.0.0.1 ::1]
	I1206 08:29:44.004593   10525 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 08:29:44.004680   10525 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 08:29:44.004743   10525 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 08:29:44.004845   10525 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 08:29:44.004953   10525 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 08:29:44.005038   10525 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 08:29:44.005124   10525 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 08:29:44.005214   10525 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 08:29:44.005292   10525 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 08:29:44.005400   10525 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 08:29:44.005504   10525 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 08:29:44.006803   10525 out.go:252]   - Booting up control plane ...
	I1206 08:29:44.006890   10525 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 08:29:44.006975   10525 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 08:29:44.007058   10525 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 08:29:44.007171   10525 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 08:29:44.007291   10525 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 08:29:44.007418   10525 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 08:29:44.007520   10525 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 08:29:44.007561   10525 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 08:29:44.007667   10525 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 08:29:44.007771   10525 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 08:29:44.007826   10525 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.821404ms
	I1206 08:29:44.007905   10525 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 08:29:44.007970   10525 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.168:8443/livez
	I1206 08:29:44.008039   10525 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 08:29:44.008108   10525 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 08:29:44.008183   10525 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.836142283s
	I1206 08:29:44.008254   10525 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.06492022s
	I1206 08:29:44.008320   10525 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001600328s
	I1206 08:29:44.008496   10525 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 08:29:44.008659   10525 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 08:29:44.008708   10525 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 08:29:44.008921   10525 kubeadm.go:319] [mark-control-plane] Marking the node addons-618522 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 08:29:44.008983   10525 kubeadm.go:319] [bootstrap-token] Using token: 2rgaqd.9q3qr2oogpfcg4aj
	I1206 08:29:44.010299   10525 out.go:252]   - Configuring RBAC rules ...
	I1206 08:29:44.010403   10525 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 08:29:44.010494   10525 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 08:29:44.010614   10525 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 08:29:44.010742   10525 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 08:29:44.010907   10525 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 08:29:44.010996   10525 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 08:29:44.011145   10525 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 08:29:44.011215   10525 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 08:29:44.011289   10525 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 08:29:44.011311   10525 kubeadm.go:319] 
	I1206 08:29:44.011396   10525 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 08:29:44.011405   10525 kubeadm.go:319] 
	I1206 08:29:44.011520   10525 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 08:29:44.011530   10525 kubeadm.go:319] 
	I1206 08:29:44.011551   10525 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 08:29:44.011600   10525 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 08:29:44.011648   10525 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 08:29:44.011654   10525 kubeadm.go:319] 
	I1206 08:29:44.011698   10525 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 08:29:44.011704   10525 kubeadm.go:319] 
	I1206 08:29:44.011743   10525 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 08:29:44.011749   10525 kubeadm.go:319] 
	I1206 08:29:44.011791   10525 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 08:29:44.011906   10525 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 08:29:44.011973   10525 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 08:29:44.011979   10525 kubeadm.go:319] 
	I1206 08:29:44.012060   10525 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 08:29:44.012130   10525 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 08:29:44.012139   10525 kubeadm.go:319] 
	I1206 08:29:44.012208   10525 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2rgaqd.9q3qr2oogpfcg4aj \
	I1206 08:29:44.012303   10525 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2d17d07b3ca8c174fceaa58ec10b5dce3bfd9799b90057e73686cf2c9f9f3441 \
	I1206 08:29:44.012323   10525 kubeadm.go:319] 	--control-plane 
	I1206 08:29:44.012327   10525 kubeadm.go:319] 
	I1206 08:29:44.012413   10525 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 08:29:44.012424   10525 kubeadm.go:319] 
	I1206 08:29:44.012539   10525 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2rgaqd.9q3qr2oogpfcg4aj \
	I1206 08:29:44.012676   10525 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2d17d07b3ca8c174fceaa58ec10b5dce3bfd9799b90057e73686cf2c9f9f3441 
	I1206 08:29:44.012691   10525 cni.go:84] Creating CNI manager for ""
	I1206 08:29:44.012701   10525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 08:29:44.014013   10525 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 08:29:44.015128   10525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 08:29:44.028452   10525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1206 08:29:44.055027   10525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 08:29:44.055119   10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:29:44.055150   10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-618522 minikube.k8s.io/updated_at=2025_12_06T08_29_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9 minikube.k8s.io/name=addons-618522 minikube.k8s.io/primary=true
	I1206 08:29:44.117986   10525 ops.go:34] apiserver oom_adj: -16
	I1206 08:29:44.191458   10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:29:44.692166   10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:29:45.191843   10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:29:45.692249   10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:29:46.191601   10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:29:46.691740   10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:29:47.192125   10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:29:47.691513   10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:29:48.192570   10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:29:48.691713   10525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 08:29:48.793938   10525 kubeadm.go:1114] duration metric: took 4.738882027s to wait for elevateKubeSystemPrivileges
	I1206 08:29:48.793973   10525 kubeadm.go:403] duration metric: took 16.99018465s to StartCluster
	I1206 08:29:48.793995   10525 settings.go:142] acquiring lock: {Name:mk1c4376642fa0e1442961c9690dcfd3d7346ba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:29:48.794447   10525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5603/kubeconfig
	I1206 08:29:48.795027   10525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/kubeconfig: {Name:mk8c42c505f5f7f0ebf46166194656af7c5589e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 08:29:48.795263   10525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 08:29:48.795351   10525 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 08:29:48.795425   10525 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1206 08:29:48.795586   10525 addons.go:70] Setting yakd=true in profile "addons-618522"
	I1206 08:29:48.795588   10525 addons.go:70] Setting cloud-spanner=true in profile "addons-618522"
	I1206 08:29:48.795613   10525 addons.go:239] Setting addon yakd=true in "addons-618522"
	I1206 08:29:48.795628   10525 addons.go:239] Setting addon cloud-spanner=true in "addons-618522"
	I1206 08:29:48.795618   10525 addons.go:70] Setting metrics-server=true in profile "addons-618522"
	I1206 08:29:48.795647   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.795655   10525 config.go:182] Loaded profile config "addons-618522": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:29:48.795659   10525 addons.go:239] Setting addon metrics-server=true in "addons-618522"
	I1206 08:29:48.795645   10525 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-618522"
	I1206 08:29:48.795687   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.795697   10525 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-618522"
	I1206 08:29:48.795697   10525 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-618522"
	I1206 08:29:48.795713   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.795730   10525 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-618522"
	I1206 08:29:48.795745   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.795751   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.796244   10525 addons.go:70] Setting gcp-auth=true in profile "addons-618522"
	I1206 08:29:48.796294   10525 mustload.go:66] Loading cluster: addons-618522
	I1206 08:29:48.796337   10525 addons.go:70] Setting ingress-dns=true in profile "addons-618522"
	I1206 08:29:48.796371   10525 addons.go:239] Setting addon ingress-dns=true in "addons-618522"
	I1206 08:29:48.796373   10525 addons.go:70] Setting ingress=true in profile "addons-618522"
	I1206 08:29:48.796405   10525 addons.go:239] Setting addon ingress=true in "addons-618522"
	I1206 08:29:48.796406   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.796434   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.796570   10525 config.go:182] Loaded profile config "addons-618522": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:29:48.797202   10525 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-618522"
	I1206 08:29:48.797232   10525 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-618522"
	I1206 08:29:48.797260   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.797320   10525 addons.go:70] Setting storage-provisioner=true in profile "addons-618522"
	I1206 08:29:48.797341   10525 addons.go:239] Setting addon storage-provisioner=true in "addons-618522"
	I1206 08:29:48.797366   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.797508   10525 out.go:179] * Verifying Kubernetes components...
	I1206 08:29:48.797598   10525 addons.go:70] Setting inspektor-gadget=true in profile "addons-618522"
	I1206 08:29:48.797621   10525 addons.go:239] Setting addon inspektor-gadget=true in "addons-618522"
	I1206 08:29:48.797654   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.797715   10525 addons.go:70] Setting volcano=true in profile "addons-618522"
	I1206 08:29:48.797730   10525 addons.go:239] Setting addon volcano=true in "addons-618522"
	I1206 08:29:48.797751   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.797876   10525 addons.go:70] Setting default-storageclass=true in profile "addons-618522"
	I1206 08:29:48.797896   10525 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-618522"
	I1206 08:29:48.797922   10525 addons.go:70] Setting volumesnapshots=true in profile "addons-618522"
	I1206 08:29:48.797936   10525 addons.go:239] Setting addon volumesnapshots=true in "addons-618522"
	I1206 08:29:48.797977   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.798187   10525 addons.go:70] Setting registry=true in profile "addons-618522"
	I1206 08:29:48.798209   10525 addons.go:239] Setting addon registry=true in "addons-618522"
	I1206 08:29:48.798232   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.798262   10525 addons.go:70] Setting registry-creds=true in profile "addons-618522"
	I1206 08:29:48.798276   10525 addons.go:239] Setting addon registry-creds=true in "addons-618522"
	I1206 08:29:48.798294   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.798395   10525 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-618522"
	I1206 08:29:48.798428   10525 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-618522"
	I1206 08:29:48.799297   10525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 08:29:48.802916   10525 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1206 08:29:48.802916   10525 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1206 08:29:48.803885   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.804408   10525 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1206 08:29:48.804411   10525 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 08:29:48.804414   10525 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1206 08:29:48.804982   10525 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1206 08:29:48.805168   10525 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1206 08:29:48.804457   10525 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1206 08:29:48.805655   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 08:29:48.805986   10525 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 08:29:48.806002   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1206 08:29:48.806637   10525 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	W1206 08:29:48.806961   10525 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1206 08:29:48.807295   10525 addons.go:239] Setting addon default-storageclass=true in "addons-618522"
	I1206 08:29:48.807338   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.807578   10525 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 08:29:48.807622   10525 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1206 08:29:48.807641   10525 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 08:29:48.807655   10525 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1206 08:29:48.807703   10525 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 08:29:48.808650   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1206 08:29:48.808382   10525 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 08:29:48.808384   10525 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1206 08:29:48.809058   10525 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-618522"
	I1206 08:29:48.809783   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:48.809378   10525 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:29:48.809945   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 08:29:48.809382   10525 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 08:29:48.810038   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 08:29:48.810068   10525 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 08:29:48.810083   10525 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 08:29:48.809389   10525 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1206 08:29:48.810283   10525 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1206 08:29:48.811241   10525 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 08:29:48.811251   10525 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1206 08:29:48.811257   10525 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 08:29:48.811270   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1206 08:29:48.811293   10525 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1206 08:29:48.811342   10525 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 08:29:48.811248   10525 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 08:29:48.811371   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1206 08:29:48.811577   10525 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 08:29:48.811595   10525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 08:29:48.812244   10525 out.go:179]   - Using image docker.io/registry:3.0.0
	I1206 08:29:48.813292   10525 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 08:29:48.813304   10525 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 08:29:48.813369   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1206 08:29:48.813303   10525 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1206 08:29:48.813803   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.814551   10525 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 08:29:48.814576   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1206 08:29:48.815133   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.815313   10525 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 08:29:48.815766   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:48.815796   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.816557   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.816951   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:48.817607   10525 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 08:29:48.817688   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:48.817932   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.818251   10525 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 08:29:48.818587   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:48.818368   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:48.818641   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.819347   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:48.819529   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.820250   10525 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 08:29:48.820708   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:48.820736   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.821077   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.821087   10525 out.go:179]   - Using image docker.io/busybox:stable
	I1206 08:29:48.821679   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:48.821985   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.822649   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.822789   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:48.822819   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.822958   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:48.822988   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.823015   10525 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 08:29:48.823026   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 08:29:48.823185   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.823428   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.823451   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:48.823551   10525 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 08:29:48.823743   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:48.823760   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:48.823808   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.823906   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.824257   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.824339   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:48.824607   10525 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 08:29:48.824638   10525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 08:29:48.824851   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:48.824877   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.824931   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:48.824961   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.825029   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:48.825065   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.825198   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:48.825246   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:48.825275   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.825325   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.825607   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:48.825644   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:48.825700   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:48.825978   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.826811   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:48.826849   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.826936   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:48.826966   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.827069   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:48.827406   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:48.829578   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.829722   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.829982   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:48.830013   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.830082   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:48.830112   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:48.830157   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:48.830379   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	W1206 08:29:49.088159   10525 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47152->192.168.39.168:22: read: connection reset by peer
	I1206 08:29:49.088194   10525 retry.go:31] will retry after 169.45351ms: ssh: handshake failed: read tcp 192.168.39.1:47152->192.168.39.168:22: read: connection reset by peer
	W1206 08:29:49.141693   10525 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47172->192.168.39.168:22: read: connection reset by peer
	I1206 08:29:49.141724   10525 retry.go:31] will retry after 264.37626ms: ssh: handshake failed: read tcp 192.168.39.1:47172->192.168.39.168:22: read: connection reset by peer
	I1206 08:29:49.285513   10525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 08:29:49.285575   10525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 08:29:49.759226   10525 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1206 08:29:49.759267   10525 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1206 08:29:49.807120   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1206 08:29:49.835568   10525 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 08:29:49.835598   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 08:29:49.872330   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 08:29:49.883624   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 08:29:49.896992   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 08:29:49.958970   10525 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 08:29:49.959004   10525 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 08:29:49.964200   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 08:29:50.038529   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 08:29:50.042668   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 08:29:50.047585   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 08:29:50.048081   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 08:29:50.066477   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 08:29:50.186790   10525 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 08:29:50.186815   10525 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 08:29:50.309536   10525 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1206 08:29:50.309561   10525 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1206 08:29:50.514315   10525 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 08:29:50.514341   10525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 08:29:50.515009   10525 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 08:29:50.515025   10525 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 08:29:50.746561   10525 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 08:29:50.746586   10525 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 08:29:50.886154   10525 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 08:29:50.886177   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 08:29:50.925022   10525 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1206 08:29:50.925051   10525 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1206 08:29:50.963173   10525 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 08:29:50.963198   10525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 08:29:50.970820   10525 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 08:29:50.970842   10525 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 08:29:51.140333   10525 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 08:29:51.140356   10525 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 08:29:51.263612   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 08:29:51.268230   10525 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 08:29:51.268261   10525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 08:29:51.296996   10525 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1206 08:29:51.297016   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1206 08:29:51.484195   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 08:29:51.732851   10525 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 08:29:51.732876   10525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 08:29:51.779554   10525 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 08:29:51.779580   10525 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 08:29:51.842287   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1206 08:29:52.078856   10525 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 08:29:52.078884   10525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 08:29:52.105790   10525 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 08:29:52.105816   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 08:29:52.629192   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 08:29:52.640261   10525 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 08:29:52.640283   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 08:29:52.867462   10525 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.581916645s)
	I1206 08:29:52.867452   10525 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.581838874s)
	I1206 08:29:52.867559   10525 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
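The sed pipeline above inserts a hosts block ahead of the existing forward directive in the CoreDNS Corefile (and a log directive before errors). Reconstructed from those sed arguments rather than read back from the cluster, the patched ConfigMap should now contain the hosts stanza:

	kubectl --context addons-618522 -n kube-system get configmap coredns -o yaml
	# expected to show, inside the Corefile:
	#        hosts {
	#           192.168.39.1 host.minikube.internal
	#           fallthrough
	#        }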
	I1206 08:29:52.868103   10525 node_ready.go:35] waiting up to 6m0s for node "addons-618522" to be "Ready" ...
	I1206 08:29:52.881190   10525 node_ready.go:49] node "addons-618522" is "Ready"
	I1206 08:29:52.881226   10525 node_ready.go:38] duration metric: took 13.099432ms for node "addons-618522" to be "Ready" ...
	I1206 08:29:52.881241   10525 api_server.go:52] waiting for apiserver process to appear ...
	I1206 08:29:52.881302   10525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:29:53.260520   10525 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 08:29:53.260547   10525 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 08:29:53.375634   10525 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-618522" context rescaled to 1 replicas
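The rescale above trims CoreDNS from the default two replicas down to one. The equivalent manual step, assuming the same kubeconfig context, would be:

	kubectl --context addons-618522 -n kube-system scale deployment coredns --replicas=1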
	I1206 08:29:53.751013   10525 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 08:29:53.751038   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 08:29:54.069729   10525 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 08:29:54.069750   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 08:29:54.277752   10525 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 08:29:54.277774   10525 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 08:29:54.550958   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 08:29:56.156333   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (6.349175006s)
	I1206 08:29:56.256152   10525 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 08:29:56.258960   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:56.259407   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:56.259434   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:56.259596   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:56.664839   10525 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 08:29:56.826326   10525 addons.go:239] Setting addon gcp-auth=true in "addons-618522"
	I1206 08:29:56.826376   10525 host.go:66] Checking if "addons-618522" exists ...
	I1206 08:29:56.828233   10525 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 08:29:56.830118   10525 main.go:143] libmachine: domain addons-618522 has defined MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:56.830476   10525 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:96:93:89", ip: ""} in network mk-addons-618522: {Iface:virbr1 ExpiryTime:2025-12-06 09:29:21 +0000 UTC Type:0 Mac:52:54:00:96:93:89 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:addons-618522 Clientid:01:52:54:00:96:93:89}
	I1206 08:29:56.830499   10525 main.go:143] libmachine: domain addons-618522 has defined IP address 192.168.39.168 and MAC address 52:54:00:96:93:89 in network mk-addons-618522
	I1206 08:29:56.830694   10525 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/addons-618522/id_rsa Username:docker}
	I1206 08:29:58.047720   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.175341556s)
	I1206 08:29:58.047755   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.164103792s)
	I1206 08:29:58.047767   10525 addons.go:495] Verifying addon ingress=true in "addons-618522"
	I1206 08:29:58.047883   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.150860316s)
	I1206 08:29:58.047997   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.083767565s)
	I1206 08:29:58.048018   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.009458228s)
	I1206 08:29:58.048092   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.005398552s)
	I1206 08:29:58.048117   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.000509685s)
	I1206 08:29:58.048163   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.000059127s)
	I1206 08:29:58.048175   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.981677743s)
	I1206 08:29:58.048219   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.784582438s)
	I1206 08:29:58.048246   10525 addons.go:495] Verifying addon registry=true in "addons-618522"
	I1206 08:29:58.048277   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.564060056s)
	I1206 08:29:58.048307   10525 addons.go:495] Verifying addon metrics-server=true in "addons-618522"
	I1206 08:29:58.048365   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.206051037s)
	I1206 08:29:58.049707   10525 out.go:179] * Verifying ingress addon...
	I1206 08:29:58.050261   10525 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-618522 service yakd-dashboard -n yakd-dashboard
	
	I1206 08:29:58.050265   10525 out.go:179] * Verifying registry addon...
	I1206 08:29:58.051458   10525 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 08:29:58.052211   10525 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 08:29:58.089198   10525 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 08:29:58.089224   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:58.089290   10525 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 08:29:58.089303   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1206 08:29:58.102364   10525 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
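The warning above is an update conflict on the StorageClass objects while local-path is being marked as default. If local-path is the intended default, the marking can be retried by hand with the standard is-default-class annotation; a sketch, assuming the class names used by these addons ("standard" and "local-path"):

	kubectl --context addons-618522 patch storageclass standard \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl --context addons-618522 patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'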
	I1206 08:29:58.160109   10525 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.27878297s)
	I1206 08:29:58.160139   10525 api_server.go:72] duration metric: took 9.364753933s to wait for apiserver process to appear ...
	I1206 08:29:58.160146   10525 api_server.go:88] waiting for apiserver healthz status ...
	I1206 08:29:58.160167   10525 api_server.go:253] Checking apiserver healthz at https://192.168.39.168:8443/healthz ...
	I1206 08:29:58.160187   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.530952988s)
	W1206 08:29:58.160236   10525 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 08:29:58.160274   10525 retry.go:31] will retry after 331.684035ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
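This is the usual CRD ordering race: the VolumeSnapshotClass is applied in the same pass that creates the snapshot.storage.k8s.io CRDs, so the REST mapping is not available yet, and minikube falls back to the retry shown above. Outside that retry loop, the same race can be avoided by waiting for the CRDs to become Established before re-running the apply, for example:

	kubectl --context addons-618522 wait --for=condition=established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s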
	I1206 08:29:58.185232   10525 api_server.go:279] https://192.168.39.168:8443/healthz returned 200:
	ok
	I1206 08:29:58.186291   10525 api_server.go:141] control plane version: v1.34.2
	I1206 08:29:58.186317   10525 api_server.go:131] duration metric: took 26.163365ms to wait for apiserver health ...
	I1206 08:29:58.186330   10525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 08:29:58.219149   10525 system_pods.go:59] 16 kube-system pods found
	I1206 08:29:58.219197   10525 system_pods.go:61] "amd-gpu-device-plugin-2k5hq" [c5883664-cfdc-4af0-8f2c-6404a2eb83dd] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 08:29:58.219211   10525 system_pods.go:61] "coredns-66bc5c9577-7c7k7" [fb10465b-d4eb-4157-8fba-f9ecee814344] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 08:29:58.219221   10525 system_pods.go:61] "coredns-66bc5c9577-n5nl7" [d09b0bf4-9d8e-49d4-a96e-c0c0e841abaf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 08:29:58.219231   10525 system_pods.go:61] "etcd-addons-618522" [c6f9a8f5-e31d-49b3-bccd-4bcfa6772584] Running
	I1206 08:29:58.219239   10525 system_pods.go:61] "kube-apiserver-addons-618522" [5cdad140-9557-499c-a8ba-9cd6abd57a66] Running
	I1206 08:29:58.219246   10525 system_pods.go:61] "kube-controller-manager-addons-618522" [92e42c76-1eb2-4ba2-9888-7db8e39e1efa] Running
	I1206 08:29:58.219266   10525 system_pods.go:61] "kube-ingress-dns-minikube" [96c41d37-7317-4033-b500-9fcd4e3ea24b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 08:29:58.219275   10525 system_pods.go:61] "kube-proxy-g62jv" [2dc778d5-5fb1-4e20-be27-75b606e19155] Running
	I1206 08:29:58.219279   10525 system_pods.go:61] "kube-scheduler-addons-618522" [56dfd1ed-e4ab-4bdc-834f-02de7b30036d] Running
	I1206 08:29:58.219287   10525 system_pods.go:61] "metrics-server-85b7d694d7-9tv6q" [1acee34d-7cc9-4f91-81a5-5af04cf36b68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 08:29:58.219298   10525 system_pods.go:61] "nvidia-device-plugin-daemonset-mgdnq" [ba7d5636-4bd4-4737-a2f4-8b93aadfc08d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 08:29:58.219308   10525 system_pods.go:61] "registry-6b586f9694-45g8h" [9bf3de1f-8c67-4f56-8ed4-4820b8abc96d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 08:29:58.219319   10525 system_pods.go:61] "registry-creds-764b6fb674-qgdbz" [597094a8-35c3-4f4c-b160-93e5d951bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 08:29:58.219328   10525 system_pods.go:61] "registry-proxy-nj49l" [6b459c6d-2dff-4d22-afc5-16895571af55] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 08:29:58.219335   10525 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5scnk" [7d011425-02ab-4c8a-b267-36e33db2790d] Pending
	I1206 08:29:58.219347   10525 system_pods.go:61] "storage-provisioner" [db8e1388-2d9d-4022-afb8-cd29b3ab2d3a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 08:29:58.219356   10525 system_pods.go:74] duration metric: took 33.018079ms to wait for pod list to return data ...
	I1206 08:29:58.219372   10525 default_sa.go:34] waiting for default service account to be created ...
	I1206 08:29:58.251987   10525 default_sa.go:45] found service account: "default"
	I1206 08:29:58.252015   10525 default_sa.go:55] duration metric: took 32.635563ms for default service account to be created ...
	I1206 08:29:58.252026   10525 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 08:29:58.332539   10525 system_pods.go:86] 17 kube-system pods found
	I1206 08:29:58.332580   10525 system_pods.go:89] "amd-gpu-device-plugin-2k5hq" [c5883664-cfdc-4af0-8f2c-6404a2eb83dd] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 08:29:58.332590   10525 system_pods.go:89] "coredns-66bc5c9577-7c7k7" [fb10465b-d4eb-4157-8fba-f9ecee814344] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 08:29:58.332602   10525 system_pods.go:89] "coredns-66bc5c9577-n5nl7" [d09b0bf4-9d8e-49d4-a96e-c0c0e841abaf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 08:29:58.332615   10525 system_pods.go:89] "etcd-addons-618522" [c6f9a8f5-e31d-49b3-bccd-4bcfa6772584] Running
	I1206 08:29:58.332621   10525 system_pods.go:89] "kube-apiserver-addons-618522" [5cdad140-9557-499c-a8ba-9cd6abd57a66] Running
	I1206 08:29:58.332626   10525 system_pods.go:89] "kube-controller-manager-addons-618522" [92e42c76-1eb2-4ba2-9888-7db8e39e1efa] Running
	I1206 08:29:58.332638   10525 system_pods.go:89] "kube-ingress-dns-minikube" [96c41d37-7317-4033-b500-9fcd4e3ea24b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 08:29:58.332643   10525 system_pods.go:89] "kube-proxy-g62jv" [2dc778d5-5fb1-4e20-be27-75b606e19155] Running
	I1206 08:29:58.332650   10525 system_pods.go:89] "kube-scheduler-addons-618522" [56dfd1ed-e4ab-4bdc-834f-02de7b30036d] Running
	I1206 08:29:58.332658   10525 system_pods.go:89] "metrics-server-85b7d694d7-9tv6q" [1acee34d-7cc9-4f91-81a5-5af04cf36b68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 08:29:58.332669   10525 system_pods.go:89] "nvidia-device-plugin-daemonset-mgdnq" [ba7d5636-4bd4-4737-a2f4-8b93aadfc08d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 08:29:58.332678   10525 system_pods.go:89] "registry-6b586f9694-45g8h" [9bf3de1f-8c67-4f56-8ed4-4820b8abc96d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 08:29:58.332687   10525 system_pods.go:89] "registry-creds-764b6fb674-qgdbz" [597094a8-35c3-4f4c-b160-93e5d951bc3b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 08:29:58.332694   10525 system_pods.go:89] "registry-proxy-nj49l" [6b459c6d-2dff-4d22-afc5-16895571af55] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 08:29:58.332703   10525 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5scnk" [7d011425-02ab-4c8a-b267-36e33db2790d] Pending
	I1206 08:29:58.332709   10525 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mvfw9" [2ebcc929-b368-4571-bc60-16649c316fde] Pending
	I1206 08:29:58.332720   10525 system_pods.go:89] "storage-provisioner" [db8e1388-2d9d-4022-afb8-cd29b3ab2d3a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 08:29:58.332729   10525 system_pods.go:126] duration metric: took 80.696697ms to wait for k8s-apps to be running ...
	I1206 08:29:58.332741   10525 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 08:29:58.332824   10525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 08:29:58.492693   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 08:29:58.575975   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:58.576046   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:59.070416   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.519409848s)
	I1206 08:29:59.070457   10525 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-618522"
	I1206 08:29:59.070482   10525 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.242210015s)
	I1206 08:29:59.070517   10525 system_svc.go:56] duration metric: took 737.770542ms WaitForService to wait for kubelet
	I1206 08:29:59.070538   10525 kubeadm.go:587] duration metric: took 10.275150195s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 08:29:59.070664   10525 node_conditions.go:102] verifying NodePressure condition ...
	I1206 08:29:59.072201   10525 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1206 08:29:59.072202   10525 out.go:179] * Verifying csi-hostpath-driver addon...
	I1206 08:29:59.072722   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:59.074640   10525 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 08:29:59.075199   10525 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1206 08:29:59.076440   10525 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 08:29:59.076474   10525 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 08:29:59.101932   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:59.102082   10525 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 08:29:59.102102   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:29:59.140252   10525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1206 08:29:59.140281   10525 node_conditions.go:123] node cpu capacity is 2
	I1206 08:29:59.140295   10525 node_conditions.go:105] duration metric: took 69.622332ms to run NodePressure ...
	I1206 08:29:59.140305   10525 start.go:242] waiting for startup goroutines ...
	I1206 08:29:59.242846   10525 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 08:29:59.242865   10525 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 08:29:59.383332   10525 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 08:29:59.383350   10525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1206 08:29:59.468731   10525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 08:29:59.562774   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:29:59.564379   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:29:59.580147   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:00.059201   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:00.059536   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:00.083969   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:00.560066   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:00.561127   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:00.582650   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:00.659928   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.167189738s)
	I1206 08:30:01.052650   10525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.583891498s)
	I1206 08:30:01.053581   10525 addons.go:495] Verifying addon gcp-auth=true in "addons-618522"
	I1206 08:30:01.054828   10525 out.go:179] * Verifying gcp-auth addon...
	I1206 08:30:01.057160   10525 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 08:30:01.086524   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:01.086804   10525 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 08:30:01.086818   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:01.086827   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:01.088817   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:01.558206   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:01.558382   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:01.565198   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:01.582023   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:02.057641   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:02.057670   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:02.059402   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:02.080352   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:02.564500   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:02.566369   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:02.566497   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:02.582081   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:03.060152   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:03.061359   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:03.065734   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:03.080964   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:03.559006   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:03.561593   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:03.561891   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:03.578482   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:04.067551   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:04.069268   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:04.069767   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:04.165251   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:04.557683   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:04.557819   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:04.563740   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:04.580431   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:05.066010   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:05.068942   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:05.072715   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:05.080696   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:05.556670   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:05.556985   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:05.560801   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:05.579001   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:06.056885   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:06.057112   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:06.061614   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:06.080108   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:06.556083   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:06.556825   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:06.560070   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:06.579182   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:07.056433   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:07.056910   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:07.061391   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:07.081977   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:07.556615   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:07.556937   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:07.561129   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:07.580221   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:08.057196   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:08.059048   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:08.060312   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:08.156490   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:08.556538   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:08.557141   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:08.560767   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:08.579085   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:09.056737   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:09.057045   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:09.061408   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:09.079521   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:09.555007   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:09.557637   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:09.560193   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:09.578748   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:10.057753   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:10.057900   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:10.060701   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:10.079761   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:10.556534   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:10.558271   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:10.560842   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:10.578634   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:11.056452   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:11.059092   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:11.060441   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:11.081797   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:11.555874   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:11.561558   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:11.566972   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:11.578978   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:12.058029   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:12.058155   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:12.061481   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:12.083400   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:12.558281   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:12.560596   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:12.563102   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:12.582021   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:13.059893   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:13.059957   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:13.063159   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:13.086101   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:13.765387   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:13.765500   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:13.767923   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:13.769020   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:14.066619   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:14.068734   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:14.068882   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:14.084093   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:14.571215   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:14.575404   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:14.575507   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:14.586291   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:15.059501   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:15.059501   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:15.062832   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:15.084649   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:15.557191   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:15.557372   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:15.559947   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:15.578570   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:16.056955   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:16.057058   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:16.061339   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:16.080059   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:16.558230   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:16.558530   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:16.560864   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:16.578764   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:17.056284   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:17.056436   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:17.061674   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:17.081408   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:17.555126   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:17.557423   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:17.560604   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:17.579967   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:18.059340   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:18.060319   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:18.062010   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:18.081309   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:18.561018   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:18.563393   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:18.564011   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:18.582939   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:19.061178   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:19.061291   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:19.065501   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:19.080497   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:19.556331   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:19.559333   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:19.561416   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:19.579635   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:20.056995   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:20.059209   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:20.061964   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:20.079819   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:20.555619   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:20.557460   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:20.560029   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:20.579159   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:21.182764   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:21.183087   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:21.183842   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:21.184208   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:21.557457   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:21.557584   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:21.562236   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:21.582260   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:22.058836   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:22.064879   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:22.066212   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:22.083236   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:22.558499   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:22.558656   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:22.564222   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:22.581090   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:23.058092   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:23.062017   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:23.063953   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:23.079856   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:23.560860   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:23.566724   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:23.569904   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:23.582480   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:24.059372   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:24.062781   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:24.063652   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:24.080339   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:24.563224   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:24.563236   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:24.565307   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:24.580665   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:25.061135   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:25.065257   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:25.065302   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:25.081534   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:25.555023   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:25.557751   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:25.562773   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:25.581784   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:26.547637   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:26.550720   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:26.550808   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:26.550902   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:26.555554   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:26.558819   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:26.561129   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:26.580323   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:27.061869   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:27.062914   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:27.063109   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:27.080933   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:27.556000   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:27.557630   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:27.561553   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:27.581326   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:28.066244   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:28.066328   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:28.066668   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:28.086811   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:28.558831   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:28.559169   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:28.565892   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:28.733895   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:29.059075   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:29.061546   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:29.065025   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:29.081273   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:29.562957   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:29.563427   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:29.569361   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:29.869108   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:30.057817   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:30.058057   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:30.060560   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:30.083935   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:30.557888   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:30.559573   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:30.564152   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:30.582104   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:31.060754   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:31.061881   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:31.062146   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:31.082955   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:31.557953   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:31.558071   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:31.563014   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:31.580838   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:32.057639   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:32.060269   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:32.061676   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:32.080198   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:32.558368   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:32.566158   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:32.567534   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:32.578604   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:33.100780   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:33.100896   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:33.101166   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:33.101307   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:33.568511   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:33.568555   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:33.568853   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:33.579460   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:34.057999   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:34.058144   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:34.060496   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:34.079210   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:34.557550   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:34.558137   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:34.560145   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:34.578497   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:35.055548   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:35.057246   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:35.060722   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:35.078426   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:35.556595   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:35.557446   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:35.560440   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:35.579158   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:36.056619   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:36.058063   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:36.061578   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:36.082897   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:36.562698   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:36.563318   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:36.569429   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:36.585415   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:37.058215   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:37.062186   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:37.063895   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:37.079208   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:37.558757   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:37.558908   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:37.562116   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:37.579277   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:38.057198   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 08:30:38.057291   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:38.059247   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:38.079464   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:38.555959   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:38.556861   10525 kapi.go:107] duration metric: took 40.504646698s to wait for kubernetes.io/minikube-addons=registry ...
	I1206 08:30:38.560323   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:38.579264   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:39.055386   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:39.061086   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:39.078994   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:39.556829   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:39.560573   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:39.578749   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:40.055192   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:40.064128   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:40.080296   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:40.555962   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:40.561962   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:40.581827   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:41.055856   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:41.061859   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:41.081246   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:41.558146   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:41.562511   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:41.583504   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:42.057200   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:42.064768   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:42.080174   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:42.558045   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:42.566152   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:42.585204   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:43.056251   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:43.061838   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:43.080558   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:43.640520   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:43.640906   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:43.641115   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:44.056720   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:44.060276   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:44.078637   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:44.556318   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:44.560430   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:44.579399   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:45.055113   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:45.060673   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:45.079549   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:45.559390   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:45.561847   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:45.580604   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:46.057573   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:46.063593   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:46.080393   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:46.556719   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:46.560415   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:46.583599   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:47.055231   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:47.064257   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:47.082571   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:47.554889   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:47.560655   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:47.580481   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:48.057889   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:48.061397   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:48.079509   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:48.556434   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:48.566571   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:48.583546   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:49.057119   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:49.062834   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:49.080356   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:49.555762   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:49.560646   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:49.579190   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:50.057144   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:50.064234   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:50.079443   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:50.554828   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:50.560692   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:50.579555   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:51.055943   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:51.062022   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:51.079282   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:51.556145   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:51.564624   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:51.581190   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:52.057530   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:52.062311   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:52.082636   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:52.555654   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:52.560637   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:52.579707   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:53.057181   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:53.060392   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:53.080962   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:53.559708   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:53.565993   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:53.580691   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:54.057875   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:54.062729   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:54.080162   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:54.563837   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:54.565427   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:54.583217   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:55.054675   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:55.069825   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:55.082983   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:55.557306   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:55.561424   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:55.580640   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:56.055460   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:56.060748   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:56.080212   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:56.564361   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:56.565593   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:56.579600   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:57.058806   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:57.064478   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:57.085548   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:57.556298   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:57.560522   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:57.580728   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:58.067087   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:58.067363   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:58.085258   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:58.556708   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:58.564193   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:58.582093   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:59.055395   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:59.062357   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:59.081128   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:30:59.556931   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:30:59.560489   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:30:59.579627   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:00.057933   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:00.063621   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:00.082515   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:00.555574   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:00.562058   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:00.579200   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:01.080204   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:01.080902   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:01.086276   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:01.557706   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:01.560433   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:01.580222   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:02.072131   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:02.072679   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:02.085928   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:02.574528   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:02.575237   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:02.579355   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:03.070006   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:03.070129   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:03.090947   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:03.561265   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:03.563776   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:03.581416   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:04.064599   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:04.064764   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:04.086494   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:04.567573   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:04.567573   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:04.599422   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:05.063171   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:05.066378   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:05.079388   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:05.561928   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:05.569211   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:05.583952   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:06.137782   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:06.140205   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:06.140370   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:06.561584   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:06.565256   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:06.586400   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:07.058483   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:07.068151   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:07.080699   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:07.559034   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:07.573129   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:07.579637   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:08.056155   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:08.061068   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:08.080236   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:08.560948   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:08.561916   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:08.579711   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:09.055514   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:09.060937   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:09.080000   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:09.555901   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:09.563029   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:09.582605   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:10.055376   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:10.060625   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:10.083187   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:10.557047   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:10.561853   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:10.578802   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:11.059344   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:11.063972   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:11.083976   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:11.558427   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:11.560928   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:11.578629   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:12.059358   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:12.063078   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:12.078261   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:12.558965   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:12.562606   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:12.580999   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:13.056996   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:13.061222   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:13.079097   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:13.561176   10525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 08:31:13.563065   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:13.579286   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:14.055777   10525 kapi.go:107] duration metric: took 1m16.004311352s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 08:31:14.060954   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:14.079658   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:14.562725   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:14.579435   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:15.061879   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:15.079159   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:15.561885   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:15.578751   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:16.061272   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:16.078888   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:16.569267   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:16.581077   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:17.061574   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:17.083428   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:17.561634   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:17.579991   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:18.063339   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:18.080064   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:18.561562   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 08:31:18.579863   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:19.062302   10525 kapi.go:107] duration metric: took 1m18.00514191s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 08:31:19.064040   10525 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-618522 cluster.
	I1206 08:31:19.065372   10525 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 08:31:19.066757   10525 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1206 08:31:19.085693   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:19.579899   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:20.080404   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:20.579101   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:21.082024   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:21.580715   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:22.079678   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:22.579528   10525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 08:31:23.079330   10525 kapi.go:107] duration metric: took 1m24.004687821s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 08:31:23.081024   10525 out.go:179] * Enabled addons: inspektor-gadget, registry-creds, nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, amd-gpu-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1206 08:31:23.082139   10525 addons.go:530] duration metric: took 1m34.286724246s for enable addons: enabled=[inspektor-gadget registry-creds nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns amd-gpu-device-plugin metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1206 08:31:23.082186   10525 start.go:247] waiting for cluster config update ...
	I1206 08:31:23.082212   10525 start.go:256] writing updated cluster config ...
	I1206 08:31:23.082623   10525 ssh_runner.go:195] Run: rm -f paused
	I1206 08:31:23.090080   10525 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 08:31:23.094190   10525 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7c7k7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:31:23.099658   10525 pod_ready.go:94] pod "coredns-66bc5c9577-7c7k7" is "Ready"
	I1206 08:31:23.099683   10525 pod_ready.go:86] duration metric: took 5.470554ms for pod "coredns-66bc5c9577-7c7k7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:31:23.102222   10525 pod_ready.go:83] waiting for pod "etcd-addons-618522" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:31:23.109942   10525 pod_ready.go:94] pod "etcd-addons-618522" is "Ready"
	I1206 08:31:23.109975   10525 pod_ready.go:86] duration metric: took 7.728641ms for pod "etcd-addons-618522" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:31:23.114550   10525 pod_ready.go:83] waiting for pod "kube-apiserver-addons-618522" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:31:23.122329   10525 pod_ready.go:94] pod "kube-apiserver-addons-618522" is "Ready"
	I1206 08:31:23.122366   10525 pod_ready.go:86] duration metric: took 7.78139ms for pod "kube-apiserver-addons-618522" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:31:23.125252   10525 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-618522" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:31:23.494708   10525 pod_ready.go:94] pod "kube-controller-manager-addons-618522" is "Ready"
	I1206 08:31:23.494748   10525 pod_ready.go:86] duration metric: took 369.464687ms for pod "kube-controller-manager-addons-618522" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:31:23.694607   10525 pod_ready.go:83] waiting for pod "kube-proxy-g62jv" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:31:24.095376   10525 pod_ready.go:94] pod "kube-proxy-g62jv" is "Ready"
	I1206 08:31:24.095400   10525 pod_ready.go:86] duration metric: took 400.765965ms for pod "kube-proxy-g62jv" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:31:24.295544   10525 pod_ready.go:83] waiting for pod "kube-scheduler-addons-618522" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:31:24.694310   10525 pod_ready.go:94] pod "kube-scheduler-addons-618522" is "Ready"
	I1206 08:31:24.694335   10525 pod_ready.go:86] duration metric: took 398.772619ms for pod "kube-scheduler-addons-618522" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 08:31:24.694347   10525 pod_ready.go:40] duration metric: took 1.604236047s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 08:31:24.742491   10525 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 08:31:24.744417   10525 out.go:179] * Done! kubectl is now configured to use "addons-618522" cluster and "default" namespace by default
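
The three gcp-auth messages in the log above ("Your GCP credentials will now be mounted into every pod ...") describe how the addon behaves: credentials are injected into every newly created pod in the cluster unless the pod carries a label with the `gcp-auth-skip-secret` key, which is why the ingress-nginx-controller sandbox later in this log shows gcp-auth-skip-secret: true among its labels, and existing pods only pick up the mount after being recreated or after rerunning addons enable with --refresh. As a minimal sketch only (the pod name busybox-no-gcp and its image/command are hypothetical and not part of this test run), a manifest that opts a pod out of the credential mount would look like:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox-no-gcp            # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"  # tells the gcp-auth addon not to mount credentials into this pod
	spec:
	  containers:
	  - name: busybox
	    image: busybox
	    command: ["sleep", "3600"]

Applying this with kubectl apply -f against the addons-618522 cluster would create a pod that the addon leaves untouched, while all other new pods continue to receive the mounted credentials as the log states.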
	
	
	==> CRI-O <==
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.627566109Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42691043-31ac-448a-855e-87261df80e70 name=/runtime.v1.RuntimeService/Version
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.629907559Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43e8b58a-d353-4437-aee3-e66e2896f189 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.631973167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765010059631865564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585495,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43e8b58a-d353-4437-aee3-e66e2896f189 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.633644102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a19d6237-9435-41ae-b4ea-fd5990bf04ab name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.633728182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a19d6237-9435-41ae-b4ea-fd5990bf04ab name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.634181298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:691f4d648fd2b77571c433e75c6c0aa41c5be67869b9293fe4b511e394cd4566,PodSandboxId:6b4883c8b37cf54998971cda223aee893993a0d010650a89012d0109ee21d649,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765009919032076613,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d05c5f3-11c3-43f8-871c-1feba1d97857,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a79a7075aae608e30eb69ffd592b0bb47fbbd93d6714173436f1d16378752e4,PodSandboxId:68c49695e8e2107927cc584b310aec0aed89246aa314c86ebcbf54b4eacdef46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765009889945659194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28642f2b-ea29-4744-a69a-ca5940220bc5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:052f5654957246b5af7941d2a478138893d80c037a727f1f6813ebf93432ac17,PodSandboxId:5155eb89959d2f9bbe8e798d2c178be539eabf19d43f01f998e40778f1f2f389,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765009872709953434,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kqfmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0929d19-ff6d-4c68-9412-fb5b07ffdbc0,},Annotations:map[string]string{io.kubernetes.
container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d150608cd68e068f00224c1f99416559f82a3f1aeb0427ab691bff677e324b3b,PodSandboxId:d6bb7cc58913968e800f1f3fc42a4d4a40604533813a7ab72d353a44dee72a91,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258
ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765009860796441376,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z9k7w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 00f6c593-e4cd-444f-aba7-339ba75535f7,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b561a47833358a4bf2821d95579d79a3c858664e9c1ee1d0a0623d1ba993837b,PodSandboxId:3ea76a13c4ee4ca508e855f135f24f8e86c6a4dbe6e6f53616400278740d7923,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765009853632143591,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4lxk7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 619ee1c1-b56d-499e-ab95-7258e5762c45,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83955f1142946d7799f8db0f9c9342642b9fc3c3d429f6da6bd43d36dd032a0e,PodSandboxId:e3dfd570c3797ca4ee0cb188410f6886d83dda1aa9af253c73011b2119ed8b17,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765009852188680705,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-qhj42,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 36a19c9b-df13-4ae3-ad0a-aa86540f0692,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2310b5caf206d277b5b5f1aaecc92cb6e653b3a0d539da262cba2feb6e06f0,PodSandboxId:e78546dbd1eb53ffc0f7df71c26d0f0a7471ecf88eef4758e14aaf8940f418fa,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765009830013101540,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96c41d37-7317-4033-b500-9fcd4e3ea24b,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5565bf8b9a19301f11d244ad76fcdac348993891755a957194bc89fdd72339cb,PodSandboxId:0b4cdbdbe9bc15467f0948ec184e0c1826e7c9a234c7902b4a5baf5382e52fcf,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765009808083187142,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2k5hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5883664-cfdc-4af0-8f2c-6404a2eb83dd,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196916f6baf47b445b94972b7e511739075008df75b03baf2c42ddc38d8b404,PodSandboxId:50198ca0b5791251bb2c823d990754eb12
713324465bc71625fb9b49e65226f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765009798510773415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db8e1388-2d9d-4022-afb8-cd29b3ab2d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6332cba2bafb17156e934f13eca1d36e74d75167c2a8796e3d86e89b9ff06e,PodSandboxId:5974b2450b9eeaa2d71b23fe75374333c4725dd83dcaa0
eca69a1571742bd8ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765009790254729674,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7c7k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb10465b-d4eb-4157-8fba-f9ecee814344,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ae256e98e16bd81ec742594553ccc43e3e85a83aea2763892cbc386f010836,PodSandboxId:3c711959e72ff170df53a1d1ef8446577d6192fa8abefe6630ecbe4b2888b63a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765009789731326334,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g62jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dc778d5-5fb1-4e20-be27-75b606e19155,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d417bd2e46f5280d633688820c441dcb6a2fef5b1b82d8be3d18480913bbb3,PodSandboxId:f0f8dfdcd430992f1681c0955d8a15af1b28088460392e90060ca09090f8c3cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765009777277684441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae37787e7ba11c90d5ad8259c870c576,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPo
rt\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4b72836806c7676ea45017355553dd89e24109180f8bb53dfa55d87f396a817,PodSandboxId:981203d6ff56d6294885064815cea7c44b5b3b8a82cd574aab675216ece7ce5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765009777244661165,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48fefc1bed6c56770bb0acf517512f62,},Annotations:map[string]string{io.kubernetes.container
.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0775959bef832653f048d0e59bc08f7c21e92bb187e7962c94eb2ff697c8d00,PodSandboxId:49bf54d6e2f6456e4c6359d1bf393427631b9bb3fa712abac3d49db7109336d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765009777253642263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-618522,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: be5217949a7eee65cb54529bc9a96202,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4e3637fc7fbd9a55214bed416a53f59f65b4efa5a8a55e1a5bf335b334a60b,PodSandboxId:5b338c173ba94a3ceedcfd8a2a0c929336fb84dc09f01ab5ce43da27e8672968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765009777212090330,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 814b02689101d7cfa34ab67b41e9b59d,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a19d6237-9435-41ae-b4ea-fd5990bf04ab name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.637896372Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5c82db7e-668d-405c-b137-cbc81b0c2408 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.638990607Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3798c65617666d5a9f9c76f6ef2d0d3586700088ad4a5392ba0ea04a980a54af,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-q49v8,Uid:ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765010058735682053,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-q49v8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:34:18.415984282Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6b4883c8b37cf54998971cda223aee893993a0d010650a89012d0109ee21d649,Metadata:&PodSandboxMetadata{Name:nginx,Uid:1d05c5f3-11c3-43f8-871c-1feba1d97857,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1765009914533988654,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d05c5f3-11c3-43f8-871c-1feba1d97857,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:31:54.208236440Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:68c49695e8e2107927cc584b310aec0aed89246aa314c86ebcbf54b4eacdef46,Metadata:&PodSandboxMetadata{Name:busybox,Uid:28642f2b-ea29-4744-a69a-ca5940220bc5,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009885680701060,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28642f2b-ea29-4744-a69a-ca5940220bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:31:25.350254189Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5155eb89959d2f9bbe8e7
98d2c178be539eabf19d43f01f998e40778f1f2f389,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-85d4c799dd-kqfmh,Uid:e0929d19-ff6d-4c68-9412-fb5b07ffdbc0,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009862588653526,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kqfmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0929d19-ff6d-4c68-9412-fb5b07ffdbc0,pod-template-hash: 85d4c799dd,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:29:57.579041444Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d6bb7cc58913968e800f1f3fc42a4d4a40604533813a7ab72d353a44dee72a91,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-z9k7w,Uid:00f6c593-e4cd-444f-aba7-339ba75535f7,Namespace:ingress-nginx,Attempt:0,},St
ate:SANDBOX_NOTREADY,CreatedAt:1765009799593521841,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: b94d31a2-3ea6-424f-b117-2245a8ecfe0e,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: b94d31a2-3ea6-424f-b117-2245a8ecfe0e,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-z9k7w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 00f6c593-e4cd-444f-aba7-339ba75535f7,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:29:57.985525166Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3ea76a13c4ee4ca508e855f135f24f8e86c6a4dbe6e6f53616400278740d7923,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-4lxk7,Uid:619ee1c1-b56d-499e-ab95-7258e5762c45,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,Crea
tedAt:1765009798930155610,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 180e75e6-2c89-4a4d-9552-a18b59e70f27,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 180e75e6-2c89-4a4d-9552-a18b59e70f27,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-4lxk7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 619ee1c1-b56d-499e-ab95-7258e5762c45,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:29:57.864293924Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e3dfd570c3797ca4ee0cb188410f6886d83dda1aa9af253c73011b2119ed8b17,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-648f6765c9-qhj42,Uid:36a19c9b-df13-4ae3-ad0a-aa86540f0692,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:17650097964
75973827,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-qhj42,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 36a19c9b-df13-4ae3-ad0a-aa86540f0692,pod-template-hash: 648f6765c9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:29:55.693116419Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:50198ca0b5791251bb2c823d990754eb12713324465bc71625fb9b49e65226f5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:db8e1388-2d9d-4022-afb8-cd29b3ab2d3a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009796403927158,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db8e1388-2d9d-4022-afb8-cd29b3ab2d3a,},Annotations:map[string]string{kubectl
.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-06T08:29:55.933135339Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e78546dbd1eb53ffc0f7df71c26d0f0a7471ecf88eef4758e14aaf8940f418fa,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:96c41d37-7317-4033-b500-9fcd4e3ea24b,Namespace:kube-system,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1765009795153609049,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96c41d37-7317-4033-b500-9fcd4e3ea24b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[
{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-12-06T08:29:54.514897358Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0b4cdbdbe9bc15467f0948ec184e0c1826e7c9a234c7902b4a5baf5382e52fcf,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-2k5hq,Uid:c5883664-cfdc-4af0-8f2c-6404a2eb83dd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009792911271816,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-2k5hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5883664-cfdc-4af0-8f2c-6404a2eb83dd,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-
plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:29:52.544971495Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3c711959e72ff170df53a1d1ef8446577d6192fa8abefe6630ecbe4b2888b63a,Metadata:&PodSandboxMetadata{Name:kube-proxy-g62jv,Uid:2dc778d5-5fb1-4e20-be27-75b606e19155,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009789343729602,Labels:map[string]string{controller-revision-hash: 66d5f8d6f6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g62jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dc778d5-5fb1-4e20-be27-75b606e19155,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:29:48.397922546Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5974b2450b9eeaa2d71b23fe75374333c4725dd83dcaa0eca69a1571742bd8ce,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-7c7k7,Uid:fb10465b-d4eb-4157-8
fba-f9ecee814344,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009789278591435,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-7c7k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb10465b-d4eb-4157-8fba-f9ecee814344,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:29:48.899129298Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:49bf54d6e2f6456e4c6359d1bf393427631b9bb3fa712abac3d49db7109336d0,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-618522,Uid:be5217949a7eee65cb54529bc9a96202,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009777022619606,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be5217949a7eee65cb54529bc9a962
02,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: be5217949a7eee65cb54529bc9a96202,kubernetes.io/config.seen: 2025-12-06T08:29:36.488516935Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:981203d6ff56d6294885064815cea7c44b5b3b8a82cd574aab675216ece7ce5d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-618522,Uid:48fefc1bed6c56770bb0acf517512f62,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009777020998303,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48fefc1bed6c56770bb0acf517512f62,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 48fefc1bed6c56770bb0acf517512f62,kubernetes.io/config.seen: 2025-12-06T08:29:36.488517943Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f0f8dfdcd430992f1681c0955d8a15af1b28088460392e90060ca09090f
8c3cb,Metadata:&PodSandboxMetadata{Name:etcd-addons-618522,Uid:ae37787e7ba11c90d5ad8259c870c576,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009777016033994,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae37787e7ba11c90d5ad8259c870c576,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.168:2379,kubernetes.io/config.hash: ae37787e7ba11c90d5ad8259c870c576,kubernetes.io/config.seen: 2025-12-06T08:29:36.488518896Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5b338c173ba94a3ceedcfd8a2a0c929336fb84dc09f01ab5ce43da27e8672968,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-618522,Uid:814b02689101d7cfa34ab67b41e9b59d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765009777012964483,Labels:map[string]string{component: kube-apiserver,io.kubernete
s.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 814b02689101d7cfa34ab67b41e9b59d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.168:8443,kubernetes.io/config.hash: 814b02689101d7cfa34ab67b41e9b59d,kubernetes.io/config.seen: 2025-12-06T08:29:36.488512965Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5c82db7e-668d-405c-b137-cbc81b0c2408 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.640681374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0525c69-90b6-47f5-9dad-184257eb1c87 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.640765910Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0525c69-90b6-47f5-9dad-184257eb1c87 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.641921388Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:691f4d648fd2b77571c433e75c6c0aa41c5be67869b9293fe4b511e394cd4566,PodSandboxId:6b4883c8b37cf54998971cda223aee893993a0d010650a89012d0109ee21d649,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765009919032076613,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d05c5f3-11c3-43f8-871c-1feba1d97857,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a79a7075aae608e30eb69ffd592b0bb47fbbd93d6714173436f1d16378752e4,PodSandboxId:68c49695e8e2107927cc584b310aec0aed89246aa314c86ebcbf54b4eacdef46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765009889945659194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28642f2b-ea29-4744-a69a-ca5940220bc5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:052f5654957246b5af7941d2a478138893d80c037a727f1f6813ebf93432ac17,PodSandboxId:5155eb89959d2f9bbe8e798d2c178be539eabf19d43f01f998e40778f1f2f389,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765009872709953434,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kqfmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0929d19-ff6d-4c68-9412-fb5b07ffdbc0,},Annotations:map[string]string{io.kubernetes.
container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d150608cd68e068f00224c1f99416559f82a3f1aeb0427ab691bff677e324b3b,PodSandboxId:d6bb7cc58913968e800f1f3fc42a4d4a40604533813a7ab72d353a44dee72a91,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258
ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765009860796441376,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z9k7w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 00f6c593-e4cd-444f-aba7-339ba75535f7,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b561a47833358a4bf2821d95579d79a3c858664e9c1ee1d0a0623d1ba993837b,PodSandboxId:3ea76a13c4ee4ca508e855f135f24f8e86c6a4dbe6e6f53616400278740d7923,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765009853632143591,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4lxk7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 619ee1c1-b56d-499e-ab95-7258e5762c45,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83955f1142946d7799f8db0f9c9342642b9fc3c3d429f6da6bd43d36dd032a0e,PodSandboxId:e3dfd570c3797ca4ee0cb188410f6886d83dda1aa9af253c73011b2119ed8b17,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765009852188680705,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-qhj42,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 36a19c9b-df13-4ae3-ad0a-aa86540f0692,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2310b5caf206d277b5b5f1aaecc92cb6e653b3a0d539da262cba2feb6e06f0,PodSandboxId:e78546dbd1eb53ffc0f7df71c26d0f0a7471ecf88eef4758e14aaf8940f418fa,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765009830013101540,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96c41d37-7317-4033-b500-9fcd4e3ea24b,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5565bf8b9a19301f11d244ad76fcdac348993891755a957194bc89fdd72339cb,PodSandboxId:0b4cdbdbe9bc15467f0948ec184e0c1826e7c9a234c7902b4a5baf5382e52fcf,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765009808083187142,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2k5hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5883664-cfdc-4af0-8f2c-6404a2eb83dd,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196916f6baf47b445b94972b7e511739075008df75b03baf2c42ddc38d8b404,PodSandboxId:50198ca0b5791251bb2c823d990754eb12
713324465bc71625fb9b49e65226f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765009798510773415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db8e1388-2d9d-4022-afb8-cd29b3ab2d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6332cba2bafb17156e934f13eca1d36e74d75167c2a8796e3d86e89b9ff06e,PodSandboxId:5974b2450b9eeaa2d71b23fe75374333c4725dd83dcaa0
eca69a1571742bd8ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765009790254729674,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7c7k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb10465b-d4eb-4157-8fba-f9ecee814344,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ae256e98e16bd81ec742594553ccc43e3e85a83aea2763892cbc386f010836,PodSandboxId:3c711959e72ff170df53a1d1ef8446577d6192fa8abefe6630ecbe4b2888b63a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765009789731326334,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g62jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dc778d5-5fb1-4e20-be27-75b606e19155,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d417bd2e46f5280d633688820c441dcb6a2fef5b1b82d8be3d18480913bbb3,PodSandboxId:f0f8dfdcd430992f1681c0955d8a15af1b28088460392e90060ca09090f8c3cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765009777277684441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae37787e7ba11c90d5ad8259c870c576,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPo
rt\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4b72836806c7676ea45017355553dd89e24109180f8bb53dfa55d87f396a817,PodSandboxId:981203d6ff56d6294885064815cea7c44b5b3b8a82cd574aab675216ece7ce5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765009777244661165,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48fefc1bed6c56770bb0acf517512f62,},Annotations:map[string]string{io.kubernetes.container
.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0775959bef832653f048d0e59bc08f7c21e92bb187e7962c94eb2ff697c8d00,PodSandboxId:49bf54d6e2f6456e4c6359d1bf393427631b9bb3fa712abac3d49db7109336d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765009777253642263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-618522,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: be5217949a7eee65cb54529bc9a96202,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4e3637fc7fbd9a55214bed416a53f59f65b4efa5a8a55e1a5bf335b334a60b,PodSandboxId:5b338c173ba94a3ceedcfd8a2a0c929336fb84dc09f01ab5ce43da27e8672968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765009777212090330,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 814b02689101d7cfa34ab67b41e9b59d,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0525c69-90b6-47f5-9dad-184257eb1c87 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.643042276Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6,},},}" file="otel-collector/interceptors.go:62" id=3d66829b-871b-4e5c-8fcf-62e9a884aabe name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.643172200Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3798c65617666d5a9f9c76f6ef2d0d3586700088ad4a5392ba0ea04a980a54af,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-q49v8,Uid:ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765010058735682053,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-q49v8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T08:34:18.415984282Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3d66829b-871b-4e5c-8fcf-62e9a884aabe name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.645163529Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:3798c65617666d5a9f9c76f6ef2d0d3586700088ad4a5392ba0ea04a980a54af,Verbose:false,}" file="otel-collector/interceptors.go:62" id=36bc79fc-7af2-46b9-a0ae-5c8b1bd33c9b name=/runtime.v1.RuntimeService/PodSandboxStatus
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.645264640Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:3798c65617666d5a9f9c76f6ef2d0d3586700088ad4a5392ba0ea04a980a54af,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-q49v8,Uid:ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765010058735682053,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:&UserNamespace{Mode:NODE,Uids:[]*IDMapping{},Gids:[]*IDMapping{},},},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-q49v8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen:
2025-12-06T08:34:18.415984282Z,kubernetes.io/config.source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=36bc79fc-7af2-46b9-a0ae-5c8b1bd33c9b name=/runtime.v1.RuntimeService/PodSandboxStatus
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.645710464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6,},},}" file="otel-collector/interceptors.go:62" id=bcc0a477-9d10-463e-8e5f-d48618a09974 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.645854901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bcc0a477-9d10-463e-8e5f-d48618a09974 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.645929265Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bcc0a477-9d10-463e-8e5f-d48618a09974 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.674953726Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36793827-889b-4a36-820a-5dd20fec522d name=/runtime.v1.RuntimeService/Version
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.675031384Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36793827-889b-4a36-820a-5dd20fec522d name=/runtime.v1.RuntimeService/Version
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.677084009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99db151d-c823-428e-96e0-e9e61a22a0fd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.678411712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765010059678381643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585495,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99db151d-c823-428e-96e0-e9e61a22a0fd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.679680723Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6f98e07-547c-4e36-8b49-c8917deaad5e name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.679738416Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6f98e07-547c-4e36-8b49-c8917deaad5e name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 08:34:19 addons-618522 crio[815]: time="2025-12-06 08:34:19.680199591Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:691f4d648fd2b77571c433e75c6c0aa41c5be67869b9293fe4b511e394cd4566,PodSandboxId:6b4883c8b37cf54998971cda223aee893993a0d010650a89012d0109ee21d649,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765009919032076613,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1d05c5f3-11c3-43f8-871c-1feba1d97857,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a79a7075aae608e30eb69ffd592b0bb47fbbd93d6714173436f1d16378752e4,PodSandboxId:68c49695e8e2107927cc584b310aec0aed89246aa314c86ebcbf54b4eacdef46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765009889945659194,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28642f2b-ea29-4744-a69a-ca5940220bc5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:052f5654957246b5af7941d2a478138893d80c037a727f1f6813ebf93432ac17,PodSandboxId:5155eb89959d2f9bbe8e798d2c178be539eabf19d43f01f998e40778f1f2f389,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765009872709953434,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-kqfmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e0929d19-ff6d-4c68-9412-fb5b07ffdbc0,},Annotations:map[string]string{io.kubernetes.
container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d150608cd68e068f00224c1f99416559f82a3f1aeb0427ab691bff677e324b3b,PodSandboxId:d6bb7cc58913968e800f1f3fc42a4d4a40604533813a7ab72d353a44dee72a91,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258
ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765009860796441376,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z9k7w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 00f6c593-e4cd-444f-aba7-339ba75535f7,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b561a47833358a4bf2821d95579d79a3c858664e9c1ee1d0a0623d1ba993837b,PodSandboxId:3ea76a13c4ee4ca508e855f135f24f8e86c6a4dbe6e6f53616400278740d7923,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765009853632143591,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4lxk7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 619ee1c1-b56d-499e-ab95-7258e5762c45,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83955f1142946d7799f8db0f9c9342642b9fc3c3d429f6da6bd43d36dd032a0e,PodSandboxId:e3dfd570c3797ca4ee0cb188410f6886d83dda1aa9af253c73011b2119ed8b17,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765009852188680705,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-qhj42,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 36a19c9b-df13-4ae3-ad0a-aa86540f0692,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af2310b5caf206d277b5b5f1aaecc92cb6e653b3a0d539da262cba2feb6e06f0,PodSandboxId:e78546dbd1eb53ffc0f7df71c26d0f0a7471ecf88eef4758e14aaf8940f418fa,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76
812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765009830013101540,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96c41d37-7317-4033-b500-9fcd4e3ea24b,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5565bf8b9a19301f11d244ad76fcdac348993891755a957194bc89fdd72339cb,PodSandboxId:0b4cdbdbe9bc15467f0948ec184e0c1826e7c9a234c7902b4a5baf5382e52fcf,Metadata:
&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765009808083187142,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2k5hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5883664-cfdc-4af0-8f2c-6404a2eb83dd,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5196916f6baf47b445b94972b7e511739075008df75b03baf2c42ddc38d8b404,PodSandboxId:50198ca0b5791251bb2c823d990754eb12
713324465bc71625fb9b49e65226f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765009798510773415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db8e1388-2d9d-4022-afb8-cd29b3ab2d3a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6332cba2bafb17156e934f13eca1d36e74d75167c2a8796e3d86e89b9ff06e,PodSandboxId:5974b2450b9eeaa2d71b23fe75374333c4725dd83dcaa0
eca69a1571742bd8ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765009790254729674,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-7c7k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb10465b-d4eb-4157-8fba-f9ecee814344,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ae256e98e16bd81ec742594553ccc43e3e85a83aea2763892cbc386f010836,PodSandboxId:3c711959e72ff170df53a1d1ef8446577d6192fa8abefe6630ecbe4b2888b63a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765009789731326334,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g62jv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dc778d5-5fb1-4e20-be27-75b606e19155,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41d417bd2e46f5280d633688820c441dcb6a2fef5b1b82d8be3d18480913bbb3,PodSandboxId:f0f8dfdcd430992f1681c0955d8a15af1b28088460392e90060ca09090f8c3cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765009777277684441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae37787e7ba11c90d5ad8259c870c576,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPo
rt\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4b72836806c7676ea45017355553dd89e24109180f8bb53dfa55d87f396a817,PodSandboxId:981203d6ff56d6294885064815cea7c44b5b3b8a82cd574aab675216ece7ce5d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765009777244661165,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48fefc1bed6c56770bb0acf517512f62,},Annotations:map[string]string{io.kubernetes.container
.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0775959bef832653f048d0e59bc08f7c21e92bb187e7962c94eb2ff697c8d00,PodSandboxId:49bf54d6e2f6456e4c6359d1bf393427631b9bb3fa712abac3d49db7109336d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765009777253642263,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-618522,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: be5217949a7eee65cb54529bc9a96202,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4e3637fc7fbd9a55214bed416a53f59f65b4efa5a8a55e1a5bf335b334a60b,PodSandboxId:5b338c173ba94a3ceedcfd8a2a0c929336fb84dc09f01ab5ce43da27e8672968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765009777212090330,Labels:map[string]string{io.kubernet
es.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-618522,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 814b02689101d7cfa34ab67b41e9b59d,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6f98e07-547c-4e36-8b49-c8917deaad5e name=/runtime.v1.RuntimeService/ListContainers
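The CRI-O debug entries above cover one kubelet polling cycle at 08:34:19: routine ListPodSandbox, ListContainers, Version (cri-o 1.29.1) and ImageFsInfo calls, all answered without errors. The only new workload visible is the hello-world-app-5d498dc89-q49v8 sandbox, created one second earlier at 08:34:18. A rough sketch of pulling the same entries straight from the node's journal (assuming the addons-618522 VM from this run is still reachable):

  # CRI-O runs as a systemd unit inside the minikube VM; filter its journal around the failure window
  out/minikube-linux-amd64 -p addons-618522 ssh -- sudo journalctl -u crio --since "2025-12-06 08:34:00" --no-pager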
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	691f4d648fd2b       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   6b4883c8b37cf       nginx                                       default
	3a79a7075aae6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   68c49695e8e21       busybox                                     default
	052f565495724       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             3 minutes ago       Running             controller                0                   5155eb89959d2       ingress-nginx-controller-85d4c799dd-kqfmh   ingress-nginx
	d150608cd68e0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              patch                     0                   d6bb7cc589139       ingress-nginx-admission-patch-z9k7w         ingress-nginx
	b561a47833358       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              create                    0                   3ea76a13c4ee4       ingress-nginx-admission-create-4lxk7        ingress-nginx
	83955f1142946       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   e3dfd570c3797       local-path-provisioner-648f6765c9-qhj42     local-path-storage
	af2310b5caf20       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   e78546dbd1eb5       kube-ingress-dns-minikube                   kube-system
	5565bf8b9a193       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   0b4cdbdbe9bc1       amd-gpu-device-plugin-2k5hq                 kube-system
	5196916f6baf4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   50198ca0b5791       storage-provisioner                         kube-system
	8e6332cba2baf       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   5974b2450b9ee       coredns-66bc5c9577-7c7k7                    kube-system
	34ae256e98e16       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago       Running             kube-proxy                0                   3c711959e72ff       kube-proxy-g62jv                            kube-system
	41d417bd2e46f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   f0f8dfdcd4309       etcd-addons-618522                          kube-system
	e0775959bef83       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago       Running             kube-controller-manager   0                   49bf54d6e2f64       kube-controller-manager-addons-618522       kube-system
	f4b72836806c7       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago       Running             kube-scheduler            0                   981203d6ff56d       kube-scheduler-addons-618522                kube-system
	ad4e3637fc7fb       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago       Running             kube-apiserver            0                   5b338c173ba94       kube-apiserver-addons-618522                kube-system
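The container listing above is CRI-O's view of the node at post-mortem time: the nginx test pod and the ingress-nginx controller are both Running with zero restarts, and only the short-lived admission-webhook containers (create, patch) have Exited as expected, so the curl timeout is not explained by a crashed or restarting container. A sketch of reproducing this listing and inspecting the controller interactively (assuming the profile is still up; the container ID is taken from the table above):

  # List every CRI-O container on the node, including exited ones
  out/minikube-linux-amd64 -p addons-618522 ssh -- sudo crictl ps -a

  # Tail the ingress-nginx controller container from the table
  out/minikube-linux-amd64 -p addons-618522 ssh -- sudo crictl logs --tail 100 052f565495724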
	
	
	==> coredns [8e6332cba2bafb17156e934f13eca1d36e74d75167c2a8796e3d86e89b9ff06e] <==
	[INFO] 10.244.0.8:41605 - 62879 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000145374s
	[INFO] 10.244.0.8:41605 - 60079 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000265763s
	[INFO] 10.244.0.8:41605 - 13447 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000612414s
	[INFO] 10.244.0.8:41605 - 52694 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000115545s
	[INFO] 10.244.0.8:41605 - 50711 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00011907s
	[INFO] 10.244.0.8:41605 - 13916 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000118279s
	[INFO] 10.244.0.8:41605 - 33962 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000319268s
	[INFO] 10.244.0.8:36189 - 38398 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000194412s
	[INFO] 10.244.0.8:36189 - 38104 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000179395s
	[INFO] 10.244.0.8:35053 - 18874 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000097769s
	[INFO] 10.244.0.8:35053 - 18578 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000321563s
	[INFO] 10.244.0.8:41902 - 25698 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000123749s
	[INFO] 10.244.0.8:41902 - 25464 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000208565s
	[INFO] 10.244.0.8:57983 - 20478 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000113851s
	[INFO] 10.244.0.8:57983 - 20029 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000332087s
	[INFO] 10.244.0.23:60166 - 36512 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000452489s
	[INFO] 10.244.0.23:41738 - 39045 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000420528s
	[INFO] 10.244.0.23:46380 - 63929 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000151944s
	[INFO] 10.244.0.23:53475 - 47117 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000075569s
	[INFO] 10.244.0.23:46177 - 26486 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000406063s
	[INFO] 10.244.0.23:52294 - 9288 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000463933s
	[INFO] 10.244.0.23:48882 - 12403 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001689352s
	[INFO] 10.244.0.23:57778 - 48105 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003651804s
	[INFO] 10.244.0.27:37196 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000822542s
	[INFO] 10.244.0.27:35920 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000348083s
	
	
	==> describe nodes <==
	Name:               addons-618522
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-618522
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=addons-618522
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T08_29_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-618522
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 08:29:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-618522
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 08:34:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 08:32:46 +0000   Sat, 06 Dec 2025 08:29:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 08:32:46 +0000   Sat, 06 Dec 2025 08:29:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 08:32:46 +0000   Sat, 06 Dec 2025 08:29:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 08:32:46 +0000   Sat, 06 Dec 2025 08:29:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.168
	  Hostname:    addons-618522
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 57f399ccdddf4d4fb1dfb1180b83c0f4
	  System UUID:                57f399cc-dddf-4d4f-b1df-b1180b83c0f4
	  Boot ID:                    b0ebd717-1090-4118-8f89-05a31099270d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  default                     hello-world-app-5d498dc89-q49v8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-kqfmh    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m22s
	  kube-system                 amd-gpu-device-plugin-2k5hq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 coredns-66bc5c9577-7c7k7                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m31s
	  kube-system                 etcd-addons-618522                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m36s
	  kube-system                 kube-apiserver-addons-618522                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-controller-manager-addons-618522        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-proxy-g62jv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-scheduler-addons-618522                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  local-path-storage          local-path-provisioner-648f6765c9-qhj42      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m28s                  kube-proxy       
	  Normal  Starting                 4m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m43s (x8 over 4m43s)  kubelet          Node addons-618522 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s (x8 over 4m43s)  kubelet          Node addons-618522 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s (x7 over 4m43s)  kubelet          Node addons-618522 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m36s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m36s                  kubelet          Node addons-618522 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s                  kubelet          Node addons-618522 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s                  kubelet          Node addons-618522 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m35s                  kubelet          Node addons-618522 status is now: NodeReady
	  Normal  RegisteredNode           4m32s                  node-controller  Node addons-618522 event: Registered Node addons-618522 in Controller
	
	
	==> dmesg <==
	[Dec 6 08:30] kauditd_printk_skb: 356 callbacks suppressed
	[  +4.671273] kauditd_printk_skb: 326 callbacks suppressed
	[  +7.214528] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.519940] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.241905] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.978909] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.377501] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.493161] kauditd_printk_skb: 116 callbacks suppressed
	[Dec 6 08:31] kauditd_printk_skb: 61 callbacks suppressed
	[  +0.292820] kauditd_printk_skb: 205 callbacks suppressed
	[  +6.677761] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.591303] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.000065] kauditd_printk_skb: 41 callbacks suppressed
	[ +14.992772] kauditd_printk_skb: 53 callbacks suppressed
	[  +6.097038] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.036442] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000070] kauditd_printk_skb: 93 callbacks suppressed
	[Dec 6 08:32] kauditd_printk_skb: 119 callbacks suppressed
	[  +3.116849] kauditd_printk_skb: 101 callbacks suppressed
	[  +2.821228] kauditd_printk_skb: 113 callbacks suppressed
	[  +1.795502] kauditd_printk_skb: 112 callbacks suppressed
	[ +12.200056] kauditd_printk_skb: 25 callbacks suppressed
	[  +0.000378] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.843624] kauditd_printk_skb: 41 callbacks suppressed
	[Dec 6 08:34] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [41d417bd2e46f5280d633688820c441dcb6a2fef5b1b82d8be3d18480913bbb3] <==
	{"level":"info","ts":"2025-12-06T08:30:47.816003Z","caller":"traceutil/trace.go:172","msg":"trace[461106243] transaction","detail":"{read_only:false; response_revision:1027; number_of_response:1; }","duration":"149.640937ms","start":"2025-12-06T08:30:47.666341Z","end":"2025-12-06T08:30:47.815982Z","steps":["trace[461106243] 'process raft request'  (duration: 149.51733ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T08:31:06.123681Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.928678ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-12-06T08:31:06.123739Z","caller":"traceutil/trace.go:172","msg":"trace[1435643251] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1148; }","duration":"128.012788ms","start":"2025-12-06T08:31:05.995714Z","end":"2025-12-06T08:31:06.123727Z","steps":["trace[1435643251] 'range keys from in-memory index tree'  (duration: 127.831592ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T08:31:06.123905Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"344.541239ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T08:31:06.123926Z","caller":"traceutil/trace.go:172","msg":"trace[448651121] range","detail":"{range_begin:/registry/ingressclasses; range_end:; response_count:0; response_revision:1148; }","duration":"345.687706ms","start":"2025-12-06T08:31:05.778232Z","end":"2025-12-06T08:31:06.123920Z","steps":["trace[448651121] 'range keys from in-memory index tree'  (duration: 344.501427ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T08:31:06.124653Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T08:31:05.778217Z","time spent":"345.721807ms","remote":"127.0.0.1:33388","response type":"/etcdserverpb.KV/Range","request count":0,"request size":28,"response count":0,"response size":29,"request content":"key:\"/registry/ingressclasses\" limit:1 "}
	{"level":"warn","ts":"2025-12-06T08:31:06.126743Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.787325ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2025-12-06T08:31:06.127911Z","caller":"traceutil/trace.go:172","msg":"trace[1474492496] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1148; }","duration":"206.950144ms","start":"2025-12-06T08:31:05.920943Z","end":"2025-12-06T08:31:06.127894Z","steps":["trace[1474492496] 'range keys from in-memory index tree'  (duration: 205.206879ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T08:31:06.125931Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"353.691877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-ptprl\" limit:1 ","response":"range_response_count:1 size:4045"}
	{"level":"info","ts":"2025-12-06T08:31:06.128563Z","caller":"traceutil/trace.go:172","msg":"trace[217878058] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-patch-ptprl; range_end:; response_count:1; response_revision:1148; }","duration":"356.326034ms","start":"2025-12-06T08:31:05.772225Z","end":"2025-12-06T08:31:06.128551Z","steps":["trace[217878058] 'range keys from in-memory index tree'  (duration: 348.898398ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T08:31:06.129182Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T08:31:05.772209Z","time spent":"356.669567ms","remote":"127.0.0.1:33166","response type":"/etcdserverpb.KV/Range","request count":0,"request size":54,"response count":1,"response size":4069,"request content":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-ptprl\" limit:1 "}
	{"level":"info","ts":"2025-12-06T08:31:11.502610Z","caller":"traceutil/trace.go:172","msg":"trace[1005952380] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"103.268416ms","start":"2025-12-06T08:31:11.399327Z","end":"2025-12-06T08:31:11.502595Z","steps":["trace[1005952380] 'process raft request'  (duration: 103.055405ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T08:31:12.524876Z","caller":"traceutil/trace.go:172","msg":"trace[1370422975] linearizableReadLoop","detail":"{readStateIndex:1193; appliedIndex:1193; }","duration":"186.965546ms","start":"2025-12-06T08:31:12.337892Z","end":"2025-12-06T08:31:12.524858Z","steps":["trace[1370422975] 'read index received'  (duration: 186.956669ms)","trace[1370422975] 'applied index is now lower than readState.Index'  (duration: 7.392µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T08:31:12.525015Z","caller":"traceutil/trace.go:172","msg":"trace[1889658527] transaction","detail":"{read_only:false; response_revision:1163; number_of_response:1; }","duration":"294.091785ms","start":"2025-12-06T08:31:12.230906Z","end":"2025-12-06T08:31:12.524998Z","steps":["trace[1889658527] 'process raft request'  (duration: 293.971793ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T08:31:12.526087Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"188.290224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.168\" limit:1 ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-12-06T08:31:12.526224Z","caller":"traceutil/trace.go:172","msg":"trace[698991661] range","detail":"{range_begin:/registry/masterleases/192.168.39.168; range_end:; response_count:1; response_revision:1163; }","duration":"188.437754ms","start":"2025-12-06T08:31:12.337777Z","end":"2025-12-06T08:31:12.526215Z","steps":["trace[698991661] 'agreement among raft nodes before linearized reading'  (duration: 187.355379ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T08:31:53.014153Z","caller":"traceutil/trace.go:172","msg":"trace[964558954] transaction","detail":"{read_only:false; response_revision:1377; number_of_response:1; }","duration":"229.271606ms","start":"2025-12-06T08:31:52.784866Z","end":"2025-12-06T08:31:53.014137Z","steps":["trace[964558954] 'process raft request'  (duration: 229.17549ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T08:31:53.266687Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.218928ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T08:31:53.266760Z","caller":"traceutil/trace.go:172","msg":"trace[905094565] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:1379; }","duration":"119.329759ms","start":"2025-12-06T08:31:53.147419Z","end":"2025-12-06T08:31:53.266749Z","steps":["trace[905094565] 'agreement among raft nodes before linearized reading'  (duration: 39.29188ms)","trace[905094565] 'range keys from in-memory index tree'  (duration: 79.903077ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T08:31:53.267483Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.729148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-mgdnq\" limit:1 ","response":"range_response_count:1 size:4478"}
	{"level":"info","ts":"2025-12-06T08:31:53.267512Z","caller":"traceutil/trace.go:172","msg":"trace[358248697] range","detail":"{range_begin:/registry/pods/kube-system/nvidia-device-plugin-daemonset-mgdnq; range_end:; response_count:1; response_revision:1380; }","duration":"112.7655ms","start":"2025-12-06T08:31:53.154739Z","end":"2025-12-06T08:31:53.267505Z","steps":["trace[358248697] 'agreement among raft nodes before linearized reading'  (duration: 111.933915ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T08:31:53.268140Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.965964ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-9884d469d\" limit:1 ","response":"range_response_count:1 size:2898"}
	{"level":"info","ts":"2025-12-06T08:31:53.268164Z","caller":"traceutil/trace.go:172","msg":"trace[2030691538] range","detail":"{range_begin:/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-9884d469d; range_end:; response_count:1; response_revision:1380; }","duration":"114.995569ms","start":"2025-12-06T08:31:53.153163Z","end":"2025-12-06T08:31:53.268159Z","steps":["trace[2030691538] 'agreement among raft nodes before linearized reading'  (duration: 113.845207ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T08:31:53.267768Z","caller":"traceutil/trace.go:172","msg":"trace[1567190529] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1380; }","duration":"126.591674ms","start":"2025-12-06T08:31:53.141169Z","end":"2025-12-06T08:31:53.267760Z","steps":["trace[1567190529] 'process raft request'  (duration: 45.575157ms)","trace[1567190529] 'compare'  (duration: 79.749235ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T08:32:25.434353Z","caller":"traceutil/trace.go:172","msg":"trace[1383362334] transaction","detail":"{read_only:false; response_revision:1686; number_of_response:1; }","duration":"154.90819ms","start":"2025-12-06T08:32:25.279262Z","end":"2025-12-06T08:32:25.434171Z","steps":["trace[1383362334] 'process raft request'  (duration: 154.729275ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:34:20 up 5 min,  0 users,  load average: 0.77, 1.31, 0.66
	Linux addons-618522 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ad4e3637fc7fbd9a55214bed416a53f59f65b4efa5a8a55e1a5bf335b334a60b] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1206 08:30:35.293564       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.97.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.97.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.97.33:443: connect: connection refused" logger="UnhandledError"
	E1206 08:30:35.297776       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.97.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.97.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.97.33:443: connect: connection refused" logger="UnhandledError"
	I1206 08:30:35.367172       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1206 08:31:37.548687       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:60400: use of closed network connection
	E1206 08:31:37.750344       1 conn.go:339] Error on socket receive: read tcp 192.168.39.168:8443->192.168.39.1:60438: use of closed network connection
	I1206 08:31:47.113884       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.119.18"}
	I1206 08:31:54.059525       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1206 08:31:54.245867       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.194.221"}
	I1206 08:32:32.662773       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1206 08:32:36.314981       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1206 08:33:00.938069       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 08:33:00.938144       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 08:33:00.982692       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 08:33:00.982980       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 08:33:01.011185       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 08:33:01.011248       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 08:33:01.043125       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 08:33:01.044294       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1206 08:33:01.990748       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1206 08:33:02.045849       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1206 08:33:02.179714       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1206 08:34:18.485578       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.103.133"}
	
	
	==> kube-controller-manager [e0775959bef832653f048d0e59bc08f7c21e92bb187e7962c94eb2ff697c8d00] <==
	E1206 08:33:09.923515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 08:33:12.444389       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 08:33:12.445642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 08:33:16.269063       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 08:33:16.270387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 08:33:16.944164       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 08:33:16.945317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1206 08:33:17.752146       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1206 08:33:17.752264       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 08:33:17.812927       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1206 08:33:17.812991       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1206 08:33:22.206839       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 08:33:22.207878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 08:33:29.557846       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 08:33:29.559242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 08:33:37.862386       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 08:33:37.863580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 08:33:45.309875       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 08:33:45.311250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 08:34:15.746241       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 08:34:15.747262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 08:34:16.369034       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 08:34:16.370355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 08:34:18.319309       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 08:34:18.320896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [34ae256e98e16bd81ec742594553ccc43e3e85a83aea2763892cbc386f010836] <==
	I1206 08:29:50.860737       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 08:29:50.963197       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 08:29:50.964491       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.168"]
	E1206 08:29:50.964593       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 08:29:51.106941       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 08:29:51.107978       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 08:29:51.109002       1 server_linux.go:132] "Using iptables Proxier"
	I1206 08:29:51.162131       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 08:29:51.163229       1 server.go:527] "Version info" version="v1.34.2"
	I1206 08:29:51.163261       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 08:29:51.172561       1 config.go:200] "Starting service config controller"
	I1206 08:29:51.172596       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 08:29:51.172611       1 config.go:106] "Starting endpoint slice config controller"
	I1206 08:29:51.172614       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 08:29:51.172622       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 08:29:51.172625       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 08:29:51.176024       1 config.go:309] "Starting node config controller"
	I1206 08:29:51.176123       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 08:29:51.176130       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 08:29:51.273384       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 08:29:51.273429       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 08:29:51.273450       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f4b72836806c7676ea45017355553dd89e24109180f8bb53dfa55d87f396a817] <==
	E1206 08:29:40.632648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 08:29:40.632726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 08:29:40.632866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 08:29:40.632959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 08:29:40.632985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 08:29:40.633018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 08:29:40.633042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 08:29:40.633075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 08:29:40.633100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 08:29:40.633130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 08:29:40.633226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 08:29:40.633254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 08:29:40.633621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 08:29:41.443380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 08:29:41.508904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 08:29:41.536483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 08:29:41.664242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 08:29:41.671308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 08:29:41.689080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 08:29:41.733670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 08:29:41.796837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 08:29:41.830084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 08:29:41.888351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 08:29:42.091981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 08:29:43.821965       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 08:33:04 addons-618522 kubelet[1500]: I1206 08:33:04.067833    1500 scope.go:117] "RemoveContainer" containerID="d88d1d6bd8a47c4ebd59f78086729cc86d8a34a5787cae7136274a0e8df3ccd4"
	Dec 06 08:33:04 addons-618522 kubelet[1500]: I1206 08:33:04.068650    1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d88d1d6bd8a47c4ebd59f78086729cc86d8a34a5787cae7136274a0e8df3ccd4"} err="failed to get container status \"d88d1d6bd8a47c4ebd59f78086729cc86d8a34a5787cae7136274a0e8df3ccd4\": rpc error: code = NotFound desc = could not find container \"d88d1d6bd8a47c4ebd59f78086729cc86d8a34a5787cae7136274a0e8df3ccd4\": container with ID starting with d88d1d6bd8a47c4ebd59f78086729cc86d8a34a5787cae7136274a0e8df3ccd4 not found: ID does not exist"
	Dec 06 08:33:04 addons-618522 kubelet[1500]: I1206 08:33:04.068667    1500 scope.go:117] "RemoveContainer" containerID="ca7aafe138914350d006b3f471220a8628f9bcb9a052f4c553e386742eb22fdb"
	Dec 06 08:33:04 addons-618522 kubelet[1500]: I1206 08:33:04.069929    1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca7aafe138914350d006b3f471220a8628f9bcb9a052f4c553e386742eb22fdb"} err="failed to get container status \"ca7aafe138914350d006b3f471220a8628f9bcb9a052f4c553e386742eb22fdb\": rpc error: code = NotFound desc = could not find container \"ca7aafe138914350d006b3f471220a8628f9bcb9a052f4c553e386742eb22fdb\": container with ID starting with ca7aafe138914350d006b3f471220a8628f9bcb9a052f4c553e386742eb22fdb not found: ID does not exist"
	Dec 06 08:33:04 addons-618522 kubelet[1500]: I1206 08:33:04.069947    1500 scope.go:117] "RemoveContainer" containerID="f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563"
	Dec 06 08:33:04 addons-618522 kubelet[1500]: I1206 08:33:04.189338    1500 scope.go:117] "RemoveContainer" containerID="f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563"
	Dec 06 08:33:04 addons-618522 kubelet[1500]: E1206 08:33:04.190097    1500 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563\": container with ID starting with f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563 not found: ID does not exist" containerID="f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563"
	Dec 06 08:33:04 addons-618522 kubelet[1500]: I1206 08:33:04.190147    1500 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563"} err="failed to get container status \"f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563\": rpc error: code = NotFound desc = could not find container \"f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563\": container with ID starting with f6053f43b5805d6e017644cd8d7e735cd9c42b7d69a9539e222dd389a1b68563 not found: ID does not exist"
	Dec 06 08:33:13 addons-618522 kubelet[1500]: E1206 08:33:13.772311    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765009993771732090 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 06 08:33:13 addons-618522 kubelet[1500]: E1206 08:33:13.772424    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765009993771732090 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 06 08:33:23 addons-618522 kubelet[1500]: E1206 08:33:23.776610    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010003776084977 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 06 08:33:23 addons-618522 kubelet[1500]: E1206 08:33:23.777011    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010003776084977 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 06 08:33:26 addons-618522 kubelet[1500]: I1206 08:33:26.390663    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-2k5hq" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 08:33:33 addons-618522 kubelet[1500]: E1206 08:33:33.780044    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010013779460154 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 06 08:33:33 addons-618522 kubelet[1500]: E1206 08:33:33.780093    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010013779460154 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 06 08:33:43 addons-618522 kubelet[1500]: E1206 08:33:43.783551    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010023783114510 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 06 08:33:43 addons-618522 kubelet[1500]: E1206 08:33:43.783579    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010023783114510 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 06 08:33:53 addons-618522 kubelet[1500]: E1206 08:33:53.788002    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010033787409988 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 06 08:33:53 addons-618522 kubelet[1500]: E1206 08:33:53.788052    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010033787409988 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 06 08:34:03 addons-618522 kubelet[1500]: E1206 08:34:03.792924    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010043791491294 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 06 08:34:03 addons-618522 kubelet[1500]: E1206 08:34:03.792975    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010043791491294 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 06 08:34:13 addons-618522 kubelet[1500]: E1206 08:34:13.795758    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010053795279611 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 06 08:34:13 addons-618522 kubelet[1500]: E1206 08:34:13.796262    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010053795279611 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 06 08:34:16 addons-618522 kubelet[1500]: I1206 08:34:16.390965    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 08:34:18 addons-618522 kubelet[1500]: I1206 08:34:18.421109    1500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmk66\" (UniqueName: \"kubernetes.io/projected/ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6-kube-api-access-bmk66\") pod \"hello-world-app-5d498dc89-q49v8\" (UID: \"ecbf3cf2-44ce-4aec-8f0f-5b2f8ef852b6\") " pod="default/hello-world-app-5d498dc89-q49v8"
	
	
	==> storage-provisioner [5196916f6baf47b445b94972b7e511739075008df75b03baf2c42ddc38d8b404] <==
	W1206 08:33:55.990027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:33:57.994688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:33:58.000702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:00.003640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:00.011749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:02.015504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:02.023560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:04.028448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:04.035652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:06.039451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:06.048741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:08.052108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:08.060531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:10.064029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:10.069881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:12.074420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:12.083709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:14.087593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:14.095594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:16.099566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:16.106775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:18.111649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:18.117834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:20.122653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:34:20.131771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-618522 -n addons-618522
helpers_test.go:269: (dbg) Run:  kubectl --context addons-618522 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-q49v8 ingress-nginx-admission-create-4lxk7 ingress-nginx-admission-patch-z9k7w
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-618522 describe pod hello-world-app-5d498dc89-q49v8 ingress-nginx-admission-create-4lxk7 ingress-nginx-admission-patch-z9k7w
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-618522 describe pod hello-world-app-5d498dc89-q49v8 ingress-nginx-admission-create-4lxk7 ingress-nginx-admission-patch-z9k7w: exit status 1 (87.148474ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-q49v8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-618522/192.168.39.168
	Start Time:       Sat, 06 Dec 2025 08:34:18 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bmk66 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bmk66:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-q49v8 to addons-618522
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4lxk7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-z9k7w" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-618522 describe pod hello-world-app-5d498dc89-q49v8 ingress-nginx-admission-create-4lxk7 ingress-nginx-admission-patch-z9k7w: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-618522 addons disable ingress --alsologtostderr -v=1: (7.827608362s)
--- FAIL: TestAddons/parallel/Ingress (155.82s)

x
+
TestFunctional/parallel/PersistentVolumeClaim (370.01s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [8c9d42e1-2ff4-4803-a2a9-34cd247885dd] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005338884s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-171063 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-171063 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-171063 get pvc myclaim -o=json
I1206 08:40:03.028837    9552 retry.go:31] will retry after 1.835461108s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:e7183a5e-36e9-4b4a-bfac-bf22c3f13aaf ResourceVersion:771 Generation:0 CreationTimestamp:2025-12-06 08:40:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001bba720 VolumeMode:0xc001bba730 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
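The retry entry above shows the harness polling the claim until its phase moves from Pending to Bound. The following is a minimal sketch of that kind of wait with client-go, assuming a ready clientset; it is not the suite's own helper (which lives in functional_test_pvc_test.go), the function name is invented, and the claim name and namespace are taken from the log:

// Sketch only: poll a PersistentVolumeClaim until it reports phase Bound,
// mirroring the "phase = Pending, want Bound" retry visible above.
package pvcwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPVCBound polls every two seconds until the named claim is Bound or
// the timeout expires; API errors abort the wait instead of being retried.
func WaitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return pvc.Status.Phase == corev1.ClaimBound, nil
		})
}

Called as WaitForPVCBound(ctx, cs, "default", "myclaim", 4*time.Minute), this reproduces the wait that succeeded here after a single retry; the failure that follows is in the pod consuming the claim, not in the claim itself.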
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-171063 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-171063 apply -f testdata/storage-provisioner/pod.yaml
I1206 08:40:05.286490    9552 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7dc40bb8-625b-41b2-b29e-daa1a498bd02] Pending
helpers_test.go:352: "sp-pod" [7dc40bb8-625b-41b2-b29e-daa1a498bd02] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-171063 -n functional-171063
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-12-06 08:46:05.513066194 +0000 UTC m=+1073.441689115
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-171063 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-171063 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-171063/192.168.39.67
Start Time:       Sat, 06 Dec 2025 08:40:05 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m26dg (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-m26dg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  6m    default-scheduler  Successfully assigned default/sp-pod to functional-171063
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-171063 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-171063 logs sp-pod -n default: exit status 1 (77.477972ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: ContainerCreating

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-171063 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
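sp-pod sat in ContainerCreating for the full 6m0s and its event list stops at Scheduled, so no image pull or mount progress for docker.io/nginx was recorded at the time of the dump. As a hedged diagnostic sketch (not part of the suite; the function name is invented, the pod name and namespace come from the log), listing the pod's events server-side is one way to confirm what the node last reported:

// Sketch only: dump the events recorded for a pod that is stuck in
// ContainerCreating, e.g. default/sp-pod from the failure above.
package poddiag

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// PrintPodEvents lists the events whose involved object is the given pod and
// prints type, reason and message, roughly what `kubectl describe pod` shows.
func PrintPodEvents(ctx context.Context, cs kubernetes.Interface, ns, pod string) error {
	events, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{
		FieldSelector: fmt.Sprintf("involvedObject.name=%s,involvedObject.kind=Pod", pod),
	})
	if err != nil {
		return err
	}
	for _, e := range events.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
	}
	return nil
}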
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-171063 -n functional-171063
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-171063 logs -n 25: (1.453178802s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-171063 update-context --alsologtostderr -v=2                                                                           │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │ 06 Dec 25 08:40 UTC │
	│ update-context │ functional-171063 update-context --alsologtostderr -v=2                                                                           │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │ 06 Dec 25 08:40 UTC │
	│ image          │ functional-171063 image ls --format short --alsologtostderr                                                                       │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │ 06 Dec 25 08:40 UTC │
	│ image          │ functional-171063 image ls --format yaml --alsologtostderr                                                                        │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │ 06 Dec 25 08:40 UTC │
	│ ssh            │ functional-171063 ssh pgrep buildkitd                                                                                             │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │                     │
	│ image          │ functional-171063 image build -t localhost/my-image:functional-171063 testdata/build --alsologtostderr                            │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │ 06 Dec 25 08:40 UTC │
	│ ssh            │ functional-171063 ssh stat /mount-9p/created-by-test                                                                              │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │ 06 Dec 25 08:40 UTC │
	│ ssh            │ functional-171063 ssh stat /mount-9p/created-by-pod                                                                               │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │ 06 Dec 25 08:40 UTC │
	│ ssh            │ functional-171063 ssh sudo umount -f /mount-9p                                                                                    │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │ 06 Dec 25 08:40 UTC │
	│ ssh            │ functional-171063 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │                     │
	│ mount          │ -p functional-171063 /tmp/TestFunctionalparallelMountCmdspecific-port3725818201/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │                     │
	│ image          │ functional-171063 image ls                                                                                                        │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │ 06 Dec 25 08:40 UTC │
	│ ssh            │ functional-171063 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │ 06 Dec 25 08:40 UTC │
	│ image          │ functional-171063 image ls --format json --alsologtostderr                                                                        │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │ 06 Dec 25 08:40 UTC │
	│ ssh            │ functional-171063 ssh -- ls -la /mount-9p                                                                                         │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │ 06 Dec 25 08:40 UTC │
	│ image          │ functional-171063 image ls --format table --alsologtostderr                                                                       │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │ 06 Dec 25 08:40 UTC │
	│ ssh            │ functional-171063 ssh sudo umount -f /mount-9p                                                                                    │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │                     │
	│ mount          │ -p functional-171063 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3991001899/001:/mount2 --alsologtostderr -v=1                │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │                     │
	│ mount          │ -p functional-171063 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3991001899/001:/mount3 --alsologtostderr -v=1                │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │                     │
	│ ssh            │ functional-171063 ssh findmnt -T /mount1                                                                                          │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │                     │
	│ mount          │ -p functional-171063 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3991001899/001:/mount1 --alsologtostderr -v=1                │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │                     │
	│ ssh            │ functional-171063 ssh findmnt -T /mount1                                                                                          │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │ 06 Dec 25 08:40 UTC │
	│ ssh            │ functional-171063 ssh findmnt -T /mount2                                                                                          │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │ 06 Dec 25 08:40 UTC │
	│ ssh            │ functional-171063 ssh findmnt -T /mount3                                                                                          │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │ 06 Dec 25 08:40 UTC │
	│ mount          │ -p functional-171063 --kill=true                                                                                                  │ functional-171063 │ jenkins │ v1.37.0 │ 06 Dec 25 08:40 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:40:20
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:40:20.883084   15929 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:40:20.883211   15929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:40:20.883221   15929 out.go:374] Setting ErrFile to fd 2...
	I1206 08:40:20.883225   15929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:40:20.883458   15929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 08:40:20.883882   15929 out.go:368] Setting JSON to false
	I1206 08:40:20.884812   15929 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1363,"bootTime":1765009058,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:40:20.884871   15929 start.go:143] virtualization: kvm guest
	I1206 08:40:20.886827   15929 out.go:179] * [functional-171063] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 08:40:20.888538   15929 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:40:20.888537   15929 notify.go:221] Checking for updates...
	I1206 08:40:20.891448   15929 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:40:20.892704   15929 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	I1206 08:40:20.893807   15929 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	I1206 08:40:20.894996   15929 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 08:40:20.896088   15929 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:40:20.897724   15929 config.go:182] Loaded profile config "functional-171063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:40:20.898459   15929 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:40:20.935626   15929 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 08:40:20.936681   15929 start.go:309] selected driver: kvm2
	I1206 08:40:20.936695   15929 start.go:927] validating driver "kvm2" against &{Name:functional-171063 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-171063 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:40:20.936782   15929 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:40:20.937636   15929 cni.go:84] Creating CNI manager for ""
	I1206 08:40:20.937708   15929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 08:40:20.937760   15929 start.go:353] cluster config:
	{Name:functional-171063 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-171063 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:40:20.939936   15929 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 06 08:46:06 functional-171063 crio[5491]: time="2025-12-06 08:46:06.322933874Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765010766322912040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:239993,},InodesUsed:&UInt64Value{Value:111,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b0a6bf2-4bcf-4ab8-ae6f-d5855fd037f6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 08:46:06 functional-171063 crio[5491]: time="2025-12-06 08:46:06.324392981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18449254-3abb-4c12-ac85-674b5e87e2cf name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 08:46:06 functional-171063 crio[5491]: time="2025-12-06 08:46:06.324448098Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18449254-3abb-4c12-ac85-674b5e87e2cf name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 08:46:06 functional-171063 crio[5491]: time="2025-12-06 08:46:06.325217562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:253d022ae1c232815c306567d52913d516861e5ef897f8684bae6b817868dced,PodSandboxId:b5f08f5c061748adaf4a8ba770d20abfca0e0b78d98fa8ae7d23a03566d267db,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1765010435533902802,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c4c-k2jrd,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 344f4941-6c6e-4084-8282-4e9c23bb7670,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0770b78c153fffcc576a5fe0b8bb64ffc1459c2b1f4e512493bda405b402974c,PodSandboxId:36ec8310648f2c9f332a0d8cd924ae35e64bc296f8cac42d467d80a0ede6a048,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1765010432413702148,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-26dbj,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b866af3b-60ec-46a5-9cc0-6a15f73b8acd,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c53e3233352048354f73e24ebd59b38cec04468d76349cce715e88511004c6,PodSandboxId:bb4aa3da1f9bec2092d290f1c69a5bb14c36dabcb9847ca8f250fa60ab3332bf,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765010424763523521,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e7b1705-1df4-4cca-b820-36c9e4bb31ff,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde0cbd25fd8a15bc188967ff383714271dd892a480bc1e97dc73c91b0680201,PodSandboxId:c155d30d2980941f6df7485d7973b8c3b3d307dbae61837f2590982d02a98bc5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765010414347099975,Labels:map[string]string{io.
kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-9xzqr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cb5c6960-1721-4e04-828a-4979165b6abd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c52dd7dff23b53745badda126d05eda8dfd3d1f5c7fa964fd5d33ac44e47e6,PodSandboxId:149de353a5d6365ba43509d94099cb473a7240db5e4ee54509f8bfda6731cac9,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765010412689013358,Labels:map[stri
ng]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-s4575,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12ae6735-61b5-45e5-a064-29a0f6ce23e2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a125773a38c1b21a8415b6b6f5b849b4c05b575694224d20eef3d429a126b01c,PodSandboxId:ecfe8dd4e5c8fcb68e51cc316c92d5f440104ddbc3922c0e68c5cfd7ce0cef7d,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1765010410268506150,Labels
:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-ql5z4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f961666-5410-46a1-bffc-ce0456227f36,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db1269cded92c4985645f17bdbda5630ad85da58774d7344c7ce58720128eab0,PodSandboxId:39e2c02185766c862f19b32f27bc034ea43042e210ae14164585f049a86ea974,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
,State:CONTAINER_RUNNING,CreatedAt:1765010373550251166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qdsjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef0850f-68ef-4569-a6fc-c9b2e6c3bd92,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d879e290547c78634c4471b9506509d7d40e3bdc5d5d2ba8df972f59b6df16ac,PodSandboxId:f49b21f47fad3895e34851a2c57fd60ce3b722d5afcb3893926dcaff0dbe4007,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1
765010373567247269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-lldj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7a40665-e9ca-4bf4-ad57-e26599416021,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b37b58f4653d8dc77695934badaa6b648719e4fdb9a69958c7b78cbb644c7b39,PodSandboxId:1d02bc21a67c00a2f7e40feb8630c5b221bcc810fa
0b7a4259ccc44bcd6ba7f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765010370025899709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-171063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a0bd67be22aa24a6c49eae9452e26a,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fcafa881261
6185bf018ee4a9fa2accb844e99fff65ffb1ef5de42dbda51fd8,PodSandboxId:6fab5d51d6d4a382a419528019286dc0679866096538176153c4e871df7713ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765010369899856419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-171063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38a26da27013a2977bc1ff9ef01a15b3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94460fb20af90dec7098bde575ad2c14afca516125b9a4ccb8755b193c945d61,PodSandboxId:5d648d9253b5a68d688923bfa6defbc74015ebcd65ef2dd6e438ce93ab914942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765010361767977445,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9d42e1-2ff4-4803-a2a9-34cd247885dd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubern
etes.pod.terminationGracePeriod: 30,},},&Container{Id:8e69eb2b9b7161c71462f27c76e7839c4d28eb4067326acf7e0e0c5f8e9ce50f,PodSandboxId:85a9a26f766509b115128fbde9776ddc9daeb62ea42dccc14d1a5d066130927a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765010353613131669,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-171063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03410f98f85a94c0d1b042fa28219496,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eb1b9d910706caea8205c03189c84984c1e3152d9c307d5820920121c52af0c,PodSandboxId:6d51dca09db23fc73e68a0d0ff5a3a646fcd56a0f3c50df69d114319c57d097c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765010348624848349,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-171063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e56362ea97b0aab7118e6922366fff,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9bdd8d69e005a7aeff748396017568be772b9932067bfd1507a779f07ce049,PodSandboxId:f49b21f47fad3895e34851a2c57fd60ce3b722d5afcb3893926dcaff0dbe4007,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765010347649993205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-lldj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7a40665-e9ca-4bf4-ad57-e26599416021,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4316e090e85b785ac1a30511a8640b2039dbd5e618ac82080e49866d7c4fdef,PodSandboxId:39e2c02185766c862f19b32f27bc034ea43042e210ae14164585f049a86ea974,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80
f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765010346982404280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qdsjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef0850f-68ef-4569-a6fc-c9b2e6c3bd92,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3900711086fd088de5f88b00a515b8c326c94cc30cef68f88e82960223a1364e,PodSandboxId:5d648d9253b5a68d688923bfa6defbc74015ebcd65ef2dd6e438ce93ab914942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d
867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765010346770071486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9d42e1-2ff4-4803-a2a9-34cd247885dd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791e570884927817df9ee435784466b9960e6e5e938587922850173376e7c739,PodSandboxId:6fab5d51d6d4a382a419528019286dc0679866096538176153c4e871df7713ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Sta
te:CONTAINER_EXITED,CreatedAt:1765010346741122471,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-171063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38a26da27013a2977bc1ff9ef01a15b3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f75b96c3e755415c74a3b993cd8d037c0d5b3eb4da2974d796da3b571b082f4,PodSandboxId:f5d0d39fa3ec09a2372d4db68a4acdf0f2779d019154f629498a5c2e5670c654,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765010314257298336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-171063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e56362ea97b0aab7118e6922366fff,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4e5ef4b6130aeeeb65a6c1c274dcb4f3cdeca32dab840b9ddeb3e95258fa62,PodSandboxId:9657685460ba982192a9331437e654d36ae22b0c27bc3c058a95cb6c608f372f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&Image
Spec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765010314182963130,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-171063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03410f98f85a94c0d1b042fa28219496,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18449254-3abb-4c12-ac85-674b5e87e2cf name=/runtime.v1.RuntimeService/Li
stContainers
	Dec 06 08:46:06 functional-171063 crio[5491]: time="2025-12-06 08:46:06.366332052Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f3ed422-4361-4da4-932c-ee3c771e8a67 name=/runtime.v1.RuntimeService/Version
	Dec 06 08:46:06 functional-171063 crio[5491]: time="2025-12-06 08:46:06.366423369Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f3ed422-4361-4da4-932c-ee3c771e8a67 name=/runtime.v1.RuntimeService/Version
	Dec 06 08:46:06 functional-171063 crio[5491]: time="2025-12-06 08:46:06.367803966Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76239cb0-5928-49cd-bdad-a8c509b081aa name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 08:46:06 functional-171063 crio[5491]: time="2025-12-06 08:46:06.368510408Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765010766368488856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:239993,},InodesUsed:&UInt64Value{Value:111,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76239cb0-5928-49cd-bdad-a8c509b081aa name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 08:46:06 functional-171063 crio[5491]: time="2025-12-06 08:46:06.369443321Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e6a5db5-31ea-4efb-9bf5-2362b5a6638e name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 08:46:06 functional-171063 crio[5491]: time="2025-12-06 08:46:06.369519057Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e6a5db5-31ea-4efb-9bf5-2362b5a6638e name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 08:46:06 functional-171063 crio[5491]: time="2025-12-06 08:46:06.369930613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:253d022ae1c232815c306567d52913d516861e5ef897f8684bae6b817868dced,PodSandboxId:b5f08f5c061748adaf4a8ba770d20abfca0e0b78d98fa8ae7d23a03566d267db,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1765010435533902802,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c4c-k2jrd,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 344f4941-6c6e-4084-8282-4e9c23bb7670,},Annotations:map[string]string{io.kub
ernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0770b78c153fffcc576a5fe0b8bb64ffc1459c2b1f4e512493bda405b402974c,PodSandboxId:36ec8310648f2c9f332a0d8cd924ae35e64bc296f8cac42d467d80a0ede6a048,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1765010432413702148,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-26dbj,io.kubernete
s.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b866af3b-60ec-46a5-9cc0-6a15f73b8acd,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c53e3233352048354f73e24ebd59b38cec04468d76349cce715e88511004c6,PodSandboxId:bb4aa3da1f9bec2092d290f1c69a5bb14c36dabcb9847ca8f250fa60ab3332bf,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1765010424763523521,Labels:map[string]string{io.
kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0e7b1705-1df4-4cca-b820-36c9e4bb31ff,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fde0cbd25fd8a15bc188967ff383714271dd892a480bc1e97dc73c91b0680201,PodSandboxId:c155d30d2980941f6df7485d7973b8c3b3d307dbae61837f2590982d02a98bc5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765010414347099975,Labels:map[string]string{io.
kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-9xzqr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cb5c6960-1721-4e04-828a-4979165b6abd,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c52dd7dff23b53745badda126d05eda8dfd3d1f5c7fa964fd5d33ac44e47e6,PodSandboxId:149de353a5d6365ba43509d94099cb473a7240db5e4ee54509f8bfda6731cac9,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765010412689013358,Labels:map[stri
ng]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-s4575,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12ae6735-61b5-45e5-a064-29a0f6ce23e2,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a125773a38c1b21a8415b6b6f5b849b4c05b575694224d20eef3d429a126b01c,PodSandboxId:ecfe8dd4e5c8fcb68e51cc316c92d5f440104ddbc3922c0e68c5cfd7ce0cef7d,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1765010410268506150,Labels
:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-ql5z4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8f961666-5410-46a1-bffc-ce0456227f36,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db1269cded92c4985645f17bdbda5630ad85da58774d7344c7ce58720128eab0,PodSandboxId:39e2c02185766c862f19b32f27bc034ea43042e210ae14164585f049a86ea974,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
,State:CONTAINER_RUNNING,CreatedAt:1765010373550251166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qdsjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef0850f-68ef-4569-a6fc-c9b2e6c3bd92,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d879e290547c78634c4471b9506509d7d40e3bdc5d5d2ba8df972f59b6df16ac,PodSandboxId:f49b21f47fad3895e34851a2c57fd60ce3b722d5afcb3893926dcaff0dbe4007,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1
765010373567247269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-lldj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7a40665-e9ca-4bf4-ad57-e26599416021,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b37b58f4653d8dc77695934badaa6b648719e4fdb9a69958c7b78cbb644c7b39,PodSandboxId:1d02bc21a67c00a2f7e40feb8630c5b221bcc810fa
0b7a4259ccc44bcd6ba7f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765010370025899709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-171063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a0bd67be22aa24a6c49eae9452e26a,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fcafa881261
6185bf018ee4a9fa2accb844e99fff65ffb1ef5de42dbda51fd8,PodSandboxId:6fab5d51d6d4a382a419528019286dc0679866096538176153c4e871df7713ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765010369899856419,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-171063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38a26da27013a2977bc1ff9ef01a15b3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94460fb20af90dec7098bde575ad2c14afca516125b9a4ccb8755b193c945d61,PodSandboxId:5d648d9253b5a68d688923bfa6defbc74015ebcd65ef2dd6e438ce93ab914942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765010361767977445,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9d42e1-2ff4-4803-a2a9-34cd247885dd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubern
etes.pod.terminationGracePeriod: 30,},},&Container{Id:8e69eb2b9b7161c71462f27c76e7839c4d28eb4067326acf7e0e0c5f8e9ce50f,PodSandboxId:85a9a26f766509b115128fbde9776ddc9daeb62ea42dccc14d1a5d066130927a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765010353613131669,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-171063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03410f98f85a94c0d1b042fa28219496,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4eb1b9d910706caea8205c03189c84984c1e3152d9c307d5820920121c52af0c,PodSandboxId:6d51dca09db23fc73e68a0d0ff5a3a646fcd56a0f3c50df69d114319c57d097c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765010348624848349,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-171063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e56362ea97b0aab7118e6922366fff,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe
-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a9bdd8d69e005a7aeff748396017568be772b9932067bfd1507a779f07ce049,PodSandboxId:f49b21f47fad3895e34851a2c57fd60ce3b722d5afcb3893926dcaff0dbe4007,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765010347649993205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-lldj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7a40665-e9ca-4bf4-ad57-e26599416021,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4316e090e85b785ac1a30511a8640b2039dbd5e618ac82080e49866d7c4fdef,PodSandboxId:39e2c02185766c862f19b32f27bc034ea43042e210ae14164585f049a86ea974,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80
f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765010346982404280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qdsjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef0850f-68ef-4569-a6fc-c9b2e6c3bd92,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3900711086fd088de5f88b00a515b8c326c94cc30cef68f88e82960223a1364e,PodSandboxId:5d648d9253b5a68d688923bfa6defbc74015ebcd65ef2dd6e438ce93ab914942,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d
867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765010346770071486,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c9d42e1-2ff4-4803-a2a9-34cd247885dd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:791e570884927817df9ee435784466b9960e6e5e938587922850173376e7c739,PodSandboxId:6fab5d51d6d4a382a419528019286dc0679866096538176153c4e871df7713ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Sta
te:CONTAINER_EXITED,CreatedAt:1765010346741122471,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-171063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 38a26da27013a2977bc1ff9ef01a15b3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f75b96c3e755415c74a3b993cd8d037c0d5b3eb4da2974d796da3b571b082f4,PodSandboxId:f5d0d39fa3ec09a2372d4db68a4acdf0f2779d019154f629498a5c2e5670c654,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765010314257298336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-171063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60e56362ea97b0aab7118e6922366fff,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c4e5ef4b6130aeeeb65a6c1c274dcb4f3cdeca32dab840b9ddeb3e95258fa62,PodSandboxId:9657685460ba982192a9331437e654d36ae22b0c27bc3c058a95cb6c608f372f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&Image
Spec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765010314182963130,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-171063,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03410f98f85a94c0d1b042fa28219496,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e6a5db5-31ea-4efb-9bf5-2362b5a6638e name=/runtime.v1.RuntimeService/Li
stContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	253d022ae1c23       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   5 minutes ago       Running             dashboard-metrics-scraper   0                   b5f08f5c06174       dashboard-metrics-scraper-77bf4d6c4c-k2jrd   kubernetes-dashboard
	0770b78c153ff       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         5 minutes ago       Running             kubernetes-dashboard        0                   36ec8310648f2       kubernetes-dashboard-855c9754f9-26dbj        kubernetes-dashboard
	f2c53e3233352       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              5 minutes ago       Exited              mount-munger                0                   bb4aa3da1f9be       busybox-mount                                default
	fde0cbd25fd8a       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6            5 minutes ago       Running             echo-server                 0                   c155d30d29809       hello-node-75c85bcc94-9xzqr                  default
	25c52dd7dff23       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6            5 minutes ago       Running             echo-server                 0                   149de353a5d63       hello-node-connect-7d85dfc575-s4575          default
	a125773a38c1b       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  5 minutes ago       Running             mysql                       0                   ecfe8dd4e5c8f       mysql-5bb876957f-ql5z4                       default
	d879e290547c7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 6 minutes ago       Running             coredns                     3                   f49b21f47fad3       coredns-66bc5c9577-lldj4                     kube-system
	db1269cded92c       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                 6 minutes ago       Running             kube-proxy                  3                   39e2c02185766       kube-proxy-qdsjp                             kube-system
	b37b58f4653d8       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                 6 minutes ago       Running             kube-apiserver              0                   1d02bc21a67c0       kube-apiserver-functional-171063             kube-system
	0fcafa8812616       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 6 minutes ago       Running             etcd                        3                   6fab5d51d6d4a       etcd-functional-171063                       kube-system
	94460fb20af90       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 6 minutes ago       Running             storage-provisioner         3                   5d648d9253b5a       storage-provisioner                          kube-system
	8e69eb2b9b716       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                 6 minutes ago       Running             kube-scheduler              3                   85a9a26f76650       kube-scheduler-functional-171063             kube-system
	4eb1b9d910706       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                 6 minutes ago       Running             kube-controller-manager     3                   6d51dca09db23       kube-controller-manager-functional-171063    kube-system
	4a9bdd8d69e00       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 6 minutes ago       Exited              coredns                     2                   f49b21f47fad3       coredns-66bc5c9577-lldj4                     kube-system
	e4316e090e85b       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                 6 minutes ago       Exited              kube-proxy                  2                   39e2c02185766       kube-proxy-qdsjp                             kube-system
	3900711086fd0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 6 minutes ago       Exited              storage-provisioner         2                   5d648d9253b5a       storage-provisioner                          kube-system
	791e570884927       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 6 minutes ago       Exited              etcd                        2                   6fab5d51d6d4a       etcd-functional-171063                       kube-system
	6f75b96c3e755       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                 7 minutes ago       Exited              kube-controller-manager     2                   f5d0d39fa3ec0       kube-controller-manager-functional-171063    kube-system
	7c4e5ef4b6130       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                 7 minutes ago       Exited              kube-scheduler              2                   9657685460ba9       kube-scheduler-functional-171063             kube-system
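
For reference, the snapshot above appears to correspond to `sudo crictl ps -a` on the node. A minimal Go sketch for re-collecting it through `minikube ssh`; the binary path and profile name below are taken from this run and should be treated as assumptions for any other run:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Assumed: the freshly built minikube binary and the
		// "functional-171063" profile seen in the output above.
		out, err := exec.Command(
			"out/minikube-linux-amd64", "-p", "functional-171063",
			"ssh", "sudo crictl ps -a",
		).CombinedOutput()
		if err != nil {
			log.Fatalf("crictl ps failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
	}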
	
	
	==> coredns [4a9bdd8d69e005a7aeff748396017568be772b9932067bfd1507a779f07ce049] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:33799 - 33855 "HINFO IN 167207184803065883.8268697584882067276. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.029388252s
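
The block above comes from the exited coredns attempt (restart count 2): connection refusals to the API service IP while the API server was unavailable, followed by a SIGTERM. A minimal client-go sketch for pulling an exited attempt's logs, equivalent to `kubectl logs --previous`; the kubeconfig location and pod name are assumptions based on the listing above:

	package main

	import (
		"context"
		"io"
		"log"
		"os"

		corev1 "k8s.io/api/core/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed: default kubeconfig and the coredns pod name shown above.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Previous=true selects the logs of the exited attempt.
		req := cs.CoreV1().Pods("kube-system").GetLogs("coredns-66bc5c9577-lldj4",
			&corev1.PodLogOptions{Container: "coredns", Previous: true})
		rc, err := req.Stream(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		defer rc.Close()
		if _, err := io.Copy(os.Stdout, rc); err != nil {
			log.Fatal(err)
		}
	}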
	
	
	==> coredns [d879e290547c78634c4471b9506509d7d40e3bdc5d5d2ba8df972f59b6df16ac] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50255 - 19938 "HINFO IN 3189925940820556243.5987650056281311949. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.059342898s
	
	
	==> describe nodes <==
	Name:               functional-171063
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-171063
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=functional-171063
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T08_37_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 08:37:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-171063
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 08:46:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 08:44:28 +0000   Sat, 06 Dec 2025 08:37:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 08:44:28 +0000   Sat, 06 Dec 2025 08:37:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 08:44:28 +0000   Sat, 06 Dec 2025 08:37:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 08:44:28 +0000   Sat, 06 Dec 2025 08:37:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    functional-171063
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 d577e17e9f714060893f5169ecbcdd4b
	  System UUID:                d577e17e-9f71-4060-893f-5169ecbcdd4b
	  Boot ID:                    7e705805-b776-4a44-b536-9b3ce796888f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-9xzqr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  default                     hello-node-connect-7d85dfc575-s4575           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  default                     mysql-5bb876957f-ql5z4                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    6m10s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-lldj4                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m34s
	  kube-system                 etcd-functional-171063                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m39s
	  kube-system                 kube-apiserver-functional-171063              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-controller-manager-functional-171063     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-proxy-qdsjp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-scheduler-functional-171063              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m32s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-k2jrd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-26dbj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m32s                  kube-proxy       
	  Normal  Starting                 6m32s                  kube-proxy       
	  Normal  Starting                 6m55s                  kube-proxy       
	  Normal  Starting                 7m41s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m39s                  kubelet          Node functional-171063 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m39s                  kubelet          Node functional-171063 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m39s                  kubelet          Node functional-171063 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m39s                  kubelet          Starting kubelet.
	  Normal  NodeReady                8m38s                  kubelet          Node functional-171063 status is now: NodeReady
	  Normal  RegisteredNode           8m35s                  node-controller  Node functional-171063 event: Registered Node functional-171063 in Controller
	  Normal  NodeHasNoDiskPressure    7m33s (x8 over 7m33s)  kubelet          Node functional-171063 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m33s (x8 over 7m33s)  kubelet          Node functional-171063 status is now: NodeHasSufficientMemory
	  Normal  Starting                 7m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m33s (x7 over 7m33s)  kubelet          Node functional-171063 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m26s                  node-controller  Node functional-171063 event: Registered Node functional-171063 in Controller
	  Normal  RegisteredNode           6m53s                  node-controller  Node functional-171063 event: Registered Node functional-171063 in Controller
	  Normal  Starting                 6m40s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m35s (x5 over 6m40s)  kubelet          Node functional-171063 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x5 over 6m40s)  kubelet          Node functional-171063 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x5 over 6m40s)  kubelet          Node functional-171063 status is now: NodeHasSufficientPID
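
A minimal client-go sketch that reads the same node conditions reported in the describe output above; the default kubeconfig location and the node name are the only assumptions:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-171063", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Prints the Ready/MemoryPressure/DiskPressure/PIDPressure conditions
		// that `kubectl describe node` summarizes above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}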
	
	
	==> dmesg <==
	[Dec 6 08:37] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.179348] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083219] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.097660] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.132733] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.283821] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.958297] kauditd_printk_skb: 255 callbacks suppressed
	[Dec 6 08:38] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.127767] kauditd_printk_skb: 349 callbacks suppressed
	[  +3.562605] kauditd_printk_skb: 63 callbacks suppressed
	[  +7.068925] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 08:39] kauditd_printk_skb: 12 callbacks suppressed
	[  +4.179065] kauditd_printk_skb: 320 callbacks suppressed
	[  +4.174748] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.127468] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.219026] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.299547] kauditd_printk_skb: 26 callbacks suppressed
	[  +1.596113] kauditd_printk_skb: 91 callbacks suppressed
	[Dec 6 08:40] kauditd_printk_skb: 86 callbacks suppressed
	[  +0.000451] kauditd_printk_skb: 132 callbacks suppressed
	[  +0.618287] crun[9895]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +3.475576] kauditd_printk_skb: 125 callbacks suppressed
	[  +5.659506] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [0fcafa8812616185bf018ee4a9fa2accb844e99fff65ffb1ef5de42dbda51fd8] <==
	{"level":"info","ts":"2025-12-06T08:40:05.131405Z","caller":"traceutil/trace.go:172","msg":"trace[1776678951] transaction","detail":"{read_only:false; response_revision:782; number_of_response:1; }","duration":"258.407096ms","start":"2025-12-06T08:40:04.872986Z","end":"2025-12-06T08:40:05.131393Z","steps":["trace[1776678951] 'process raft request'  (duration: 257.166766ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T08:40:07.411067Z","caller":"traceutil/trace.go:172","msg":"trace[1533684329] linearizableReadLoop","detail":"{readStateIndex:867; appliedIndex:867; }","duration":"127.231485ms","start":"2025-12-06T08:40:07.283819Z","end":"2025-12-06T08:40:07.411050Z","steps":["trace[1533684329] 'read index received'  (duration: 127.227253ms)","trace[1533684329] 'applied index is now lower than readState.Index'  (duration: 3.539µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T08:40:07.411149Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.315805ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T08:40:07.411166Z","caller":"traceutil/trace.go:172","msg":"trace[2104107111] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:786; }","duration":"127.34686ms","start":"2025-12-06T08:40:07.283814Z","end":"2025-12-06T08:40:07.411161Z","steps":["trace[2104107111] 'agreement among raft nodes before linearized reading'  (duration: 127.290814ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T08:40:07.411457Z","caller":"traceutil/trace.go:172","msg":"trace[466059422] transaction","detail":"{read_only:false; response_revision:787; number_of_response:1; }","duration":"265.48479ms","start":"2025-12-06T08:40:07.145964Z","end":"2025-12-06T08:40:07.411449Z","steps":["trace[466059422] 'process raft request'  (duration: 265.379184ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T08:40:08.910383Z","caller":"traceutil/trace.go:172","msg":"trace[767680371] linearizableReadLoop","detail":"{readStateIndex:868; appliedIndex:868; }","duration":"267.655998ms","start":"2025-12-06T08:40:08.642709Z","end":"2025-12-06T08:40:08.910365Z","steps":["trace[767680371] 'read index received'  (duration: 267.645415ms)","trace[767680371] 'applied index is now lower than readState.Index'  (duration: 10.057µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T08:40:08.910480Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"267.758035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T08:40:08.910496Z","caller":"traceutil/trace.go:172","msg":"trace[1972323302] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:787; }","duration":"267.786663ms","start":"2025-12-06T08:40:08.642705Z","end":"2025-12-06T08:40:08.910492Z","steps":["trace[1972323302] 'agreement among raft nodes before linearized reading'  (duration: 267.72972ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T08:40:08.914039Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"194.571628ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T08:40:08.914079Z","caller":"traceutil/trace.go:172","msg":"trace[934212891] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:787; }","duration":"194.621077ms","start":"2025-12-06T08:40:08.719450Z","end":"2025-12-06T08:40:08.914071Z","steps":["trace[934212891] 'agreement among raft nodes before linearized reading'  (duration: 194.553024ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T08:40:15.593146Z","caller":"traceutil/trace.go:172","msg":"trace[1358639512] transaction","detail":"{read_only:false; response_revision:837; number_of_response:1; }","duration":"115.329134ms","start":"2025-12-06T08:40:15.477802Z","end":"2025-12-06T08:40:15.593131Z","steps":["trace[1358639512] 'process raft request'  (duration: 115.148889ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T08:40:17.773263Z","caller":"traceutil/trace.go:172","msg":"trace[231444208] linearizableReadLoop","detail":"{readStateIndex:920; appliedIndex:920; }","duration":"138.135655ms","start":"2025-12-06T08:40:17.635111Z","end":"2025-12-06T08:40:17.773247Z","steps":["trace[231444208] 'read index received'  (duration: 138.131763ms)","trace[231444208] 'applied index is now lower than readState.Index'  (duration: 3.386µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T08:40:17.773423Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.294829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/mysql-5bb876957f-ql5z4\" limit:1 ","response":"range_response_count:1 size:3502"}
	{"level":"info","ts":"2025-12-06T08:40:17.773470Z","caller":"traceutil/trace.go:172","msg":"trace[2096927726] range","detail":"{range_begin:/registry/pods/default/mysql-5bb876957f-ql5z4; range_end:; response_count:1; response_revision:837; }","duration":"138.339549ms","start":"2025-12-06T08:40:17.635107Z","end":"2025-12-06T08:40:17.773446Z","steps":["trace[2096927726] 'agreement among raft nodes before linearized reading'  (duration: 138.221645ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T08:40:17.774392Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.544038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T08:40:17.774423Z","caller":"traceutil/trace.go:172","msg":"trace[1868746707] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:838; }","duration":"127.581993ms","start":"2025-12-06T08:40:17.646834Z","end":"2025-12-06T08:40:17.774416Z","steps":["trace[1868746707] 'agreement among raft nodes before linearized reading'  (duration: 127.52656ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T08:40:17.775377Z","caller":"traceutil/trace.go:172","msg":"trace[997568160] transaction","detail":"{read_only:false; response_revision:838; number_of_response:1; }","duration":"172.440454ms","start":"2025-12-06T08:40:17.602926Z","end":"2025-12-06T08:40:17.775366Z","steps":["trace[997568160] 'process raft request'  (duration: 170.78117ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T08:40:32.254185Z","caller":"traceutil/trace.go:172","msg":"trace[606303822] linearizableReadLoop","detail":"{readStateIndex:1004; appliedIndex:1004; }","duration":"241.429509ms","start":"2025-12-06T08:40:32.012713Z","end":"2025-12-06T08:40:32.254143Z","steps":["trace[606303822] 'read index received'  (duration: 241.424956ms)","trace[606303822] 'applied index is now lower than readState.Index'  (duration: 4.103µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T08:40:32.254366Z","caller":"traceutil/trace.go:172","msg":"trace[2094345409] transaction","detail":"{read_only:false; response_revision:919; number_of_response:1; }","duration":"367.680909ms","start":"2025-12-06T08:40:31.886672Z","end":"2025-12-06T08:40:32.254352Z","steps":["trace[2094345409] 'process raft request'  (duration: 367.582435ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T08:40:32.254807Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T08:40:31.886652Z","time spent":"367.734423ms","remote":"127.0.0.1:34590","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:918 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-06T08:40:32.254978Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"242.264247ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T08:40:32.254997Z","caller":"traceutil/trace.go:172","msg":"trace[1002888189] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:919; }","duration":"242.283605ms","start":"2025-12-06T08:40:32.012707Z","end":"2025-12-06T08:40:32.254991Z","steps":["trace[1002888189] 'agreement among raft nodes before linearized reading'  (duration: 242.247007ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T08:40:32.255330Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.0557ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T08:40:32.255351Z","caller":"traceutil/trace.go:172","msg":"trace[1455255475] range","detail":"{range_begin:/registry/poddisruptionbudgets; range_end:; response_count:0; response_revision:919; }","duration":"126.08019ms","start":"2025-12-06T08:40:32.129266Z","end":"2025-12-06T08:40:32.255346Z","steps":["trace[1455255475] 'agreement among raft nodes before linearized reading'  (duration: 126.041261ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T08:41:02.623009Z","caller":"traceutil/trace.go:172","msg":"trace[690844397] transaction","detail":"{read_only:false; response_revision:962; number_of_response:1; }","duration":"174.554784ms","start":"2025-12-06T08:41:02.448440Z","end":"2025-12-06T08:41:02.622995Z","steps":["trace[690844397] 'process raft request'  (duration: 174.432736ms)"],"step_count":1}
	
	
	==> etcd [791e570884927817df9ee435784466b9960e6e5e938587922850173376e7c739] <==
	{"level":"warn","ts":"2025-12-06T08:39:09.870672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:39:09.884233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:39:09.887368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:39:09.901959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:39:09.912022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:39:09.921739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T08:39:09.999997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49328","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T08:39:18.017479Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-06T08:39:18.017539Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-171063","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.67:2380"],"advertise-client-urls":["https://192.168.39.67:2379"]}
	{"level":"error","ts":"2025-12-06T08:39:18.017771Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T08:39:25.023492Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T08:39:25.027744Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T08:39:25.027947Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ce564ad586a3115","current-leader-member-id":"ce564ad586a3115"}
	{"level":"warn","ts":"2025-12-06T08:39:25.027749Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T08:39:25.028014Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T08:39:25.028024Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T08:39:25.027918Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.67:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T08:39:25.028036Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.67:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T08:39:25.028041Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.67:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T08:39:25.028056Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-06T08:39:25.028062Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-06T08:39:25.031889Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"error","ts":"2025-12-06T08:39:25.031938Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.67:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T08:39:25.031957Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2025-12-06T08:39:25.031963Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-171063","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.67:2380"],"advertise-client-urls":["https://192.168.39.67:2379"]}
	
	
	==> kernel <==
	 08:46:06 up 9 min,  0 users,  load average: 0.22, 0.41, 0.25
	Linux functional-171063 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b37b58f4653d8dc77695934badaa6b648719e4fdb9a69958c7b78cbb644c7b39] <==
	I1206 08:39:32.364005       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1206 08:39:32.364078       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 08:39:32.364097       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 08:39:32.383349       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1206 08:39:32.391086       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 08:39:32.394783       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 08:39:32.409983       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 08:39:33.049128       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 08:39:33.287107       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 08:39:33.959420       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 08:39:34.036067       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 08:39:34.063898       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 08:39:34.076184       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 08:39:42.518457       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 08:39:42.519549       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 08:39:42.522884       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 08:39:51.216684       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.199.8"}
	I1206 08:39:56.527752       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.95.54"}
	I1206 08:39:58.713265       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.137.10"}
	I1206 08:40:12.849854       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.73.16"}
	E1206 08:40:17.863384       1 conn.go:339] Error on socket receive: read tcp 192.168.39.67:8441->192.168.39.1:60524: use of closed network connection
	E1206 08:40:19.951799       1 conn.go:339] Error on socket receive: read tcp 192.168.39.67:8441->192.168.39.1:60584: use of closed network connection
	I1206 08:40:21.977772       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 08:40:22.373795       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.133.215"}
	I1206 08:40:22.443469       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.15.199"}
	
	
	==> kube-controller-manager [4eb1b9d910706caea8205c03189c84984c1e3152d9c307d5820920121c52af0c] <==
	E1206 08:39:32.270382       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 08:39:32.270398       1 reflector.go:205] "Failed to watch" err="validatingwebhookconfigurations.admissionregistration.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"validatingwebhookconfigurations\" in API group \"admissionregistration.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ValidatingWebhookConfiguration"
	E1206 08:39:32.270442       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 08:39:32.270457       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1206 08:39:32.270471       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 08:39:32.270485       1 reflector.go:205] "Failed to watch" err="jobs.batch is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"jobs\" in API group \"batch\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Job"
	E1206 08:39:32.270502       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 08:39:32.270522       1 reflector.go:205] "Failed to watch" err="leases.coordination.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"leases\" in API group \"coordination.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Lease"
	E1206 08:39:32.270556       1 reflector.go:205] "Failed to watch" err="validatingadmissionpolicybindings.admissionregistration.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"validatingadmissionpolicybindings\" in API group \"admissionregistration.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ValidatingAdmissionPolicyBinding"
	E1206 08:39:32.270649       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 08:39:32.270695       1 reflector.go:205] "Failed to watch" err="ingressclasses.networking.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"ingressclasses\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.IngressClass"
	E1206 08:39:32.268571       1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 08:39:32.288264       1 reflector.go:205] "Failed to watch" err="rolebindings.rbac.authorization.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RoleBinding"
	E1206 08:39:32.288330       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 08:39:32.288355       1 reflector.go:205] "Failed to watch" err="daemonsets.apps is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"daemonsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DaemonSet"
	E1206 08:39:32.288406       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 08:39:32.309729       1 reflector.go:205] "Failed to watch" err="networkpolicies.networking.k8s.io is forbidden: User \"system:kube-controller-manager\" cannot watch resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.NetworkPolicy"
	E1206 08:40:22.144194       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 08:40:22.158888       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 08:40:22.185159       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 08:40:22.188998       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 08:40:22.198224       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 08:40:22.201454       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 08:40:22.216749       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 08:40:22.216864       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [6f75b96c3e755415c74a3b993cd8d037c0d5b3eb4da2974d796da3b571b082f4] <==
	I1206 08:38:40.418113       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 08:38:40.420766       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 08:38:40.420907       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 08:38:40.422875       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1206 08:38:40.424382       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1206 08:38:40.428060       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 08:38:40.432105       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 08:38:40.433462       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 08:38:40.436645       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 08:38:40.436864       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 08:38:40.436946       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 08:38:40.437041       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 08:38:40.437061       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 08:38:40.439899       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 08:38:40.443221       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1206 08:38:40.451533       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 08:38:40.458764       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 08:38:40.461186       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 08:38:40.464670       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 08:38:40.468298       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1206 08:38:40.468483       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1206 08:38:40.468350       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1206 08:38:40.470817       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 08:38:40.473177       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 08:38:40.474518       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	
	
	==> kube-proxy [db1269cded92c4985645f17bdbda5630ad85da58774d7344c7ce58720128eab0] <==
	I1206 08:39:34.017231       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 08:39:34.117335       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 08:39:34.117388       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.67"]
	E1206 08:39:34.117454       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 08:39:34.190987       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 08:39:34.191051       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 08:39:34.191073       1 server_linux.go:132] "Using iptables Proxier"
	I1206 08:39:34.203455       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 08:39:34.203779       1 server.go:527] "Version info" version="v1.34.2"
	I1206 08:39:34.203793       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 08:39:34.210527       1 config.go:200] "Starting service config controller"
	I1206 08:39:34.210566       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 08:39:34.210659       1 config.go:106] "Starting endpoint slice config controller"
	I1206 08:39:34.210665       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 08:39:34.210675       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 08:39:34.210678       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 08:39:34.218544       1 config.go:309] "Starting node config controller"
	I1206 08:39:34.218738       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 08:39:34.218767       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 08:39:34.310917       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 08:39:34.311013       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 08:39:34.311639       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [e4316e090e85b785ac1a30511a8640b2039dbd5e618ac82080e49866d7c4fdef] <==
	E1206 08:39:08.383422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-171063&limit=500&resourceVersion=0\": dial tcp 192.168.39.67:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1206 08:39:11.304373       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 08:39:11.304404       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.67"]
	E1206 08:39:11.304472       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 08:39:11.340841       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 08:39:11.340938       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 08:39:11.340981       1 server_linux.go:132] "Using iptables Proxier"
	I1206 08:39:11.350874       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 08:39:11.351290       1 server.go:527] "Version info" version="v1.34.2"
	I1206 08:39:11.351375       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 08:39:11.355701       1 config.go:200] "Starting service config controller"
	I1206 08:39:11.355731       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 08:39:11.355746       1 config.go:106] "Starting endpoint slice config controller"
	I1206 08:39:11.355749       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 08:39:11.355769       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 08:39:11.355772       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 08:39:11.356152       1 config.go:309] "Starting node config controller"
	I1206 08:39:11.356185       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 08:39:11.356190       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 08:39:11.455948       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 08:39:11.455980       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 08:39:11.456051       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [7c4e5ef4b6130aeeeb65a6c1c274dcb4f3cdeca32dab840b9ddeb3e95258fa62] <==
	E1206 08:38:37.005266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 08:38:37.005431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 08:38:37.005474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 08:38:37.005516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 08:38:37.005558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 08:38:37.007773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 08:38:37.007888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 08:38:37.007910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 08:38:37.008055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 08:38:37.008063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 08:38:37.008154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 08:38:37.008161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 08:38:37.008319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 08:38:37.008372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 08:38:37.008411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 08:38:37.008887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 08:38:37.008952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 08:38:37.010191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1206 08:38:38.485481       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 08:38:58.497380       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1206 08:38:58.497442       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1206 08:38:58.497470       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1206 08:38:58.497493       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 08:38:58.503143       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1206 08:38:58.503193       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8e69eb2b9b7161c71462f27c76e7839c4d28eb4067326acf7e0e0c5f8e9ce50f] <==
	I1206 08:39:14.763640       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 08:39:14.763652       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 08:39:14.763657       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 08:39:14.763871       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 08:39:14.763919       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 08:39:14.863833       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 08:39:14.863909       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1206 08:39:14.863938       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1206 08:39:32.265007       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 08:39:32.265226       1 reflector.go:205] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 08:39:32.265375       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 08:39:32.265549       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 08:39:32.265750       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 08:39:32.266275       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 08:39:32.267972       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 08:39:32.268024       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 08:39:32.268043       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 08:39:32.268062       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 08:39:32.268070       1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 08:39:32.268078       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 08:39:32.268086       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 08:39:32.268111       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 08:39:32.268118       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 08:39:32.268134       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 08:39:32.268159       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	
	
	==> kubelet <==
	Dec 06 08:44:26 functional-171063 kubelet[6732]: E1206 08:44:26.728214    6732 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod60e56362ea97b0aab7118e6922366fff/crio-f5d0d39fa3ec09a2372d4db68a4acdf0f2779d019154f629498a5c2e5670c654: Error finding container f5d0d39fa3ec09a2372d4db68a4acdf0f2779d019154f629498a5c2e5670c654: Status 404 returned error can't find the container with id f5d0d39fa3ec09a2372d4db68a4acdf0f2779d019154f629498a5c2e5670c654
	Dec 06 08:44:26 functional-171063 kubelet[6732]: E1206 08:44:26.846380    6732 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010666846002330 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:44:26 functional-171063 kubelet[6732]: E1206 08:44:26.846421    6732 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010666846002330 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:44:36 functional-171063 kubelet[6732]: E1206 08:44:36.848651    6732 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010676848119156 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:44:36 functional-171063 kubelet[6732]: E1206 08:44:36.848699    6732 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010676848119156 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:44:46 functional-171063 kubelet[6732]: E1206 08:44:46.850881    6732 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010686850341395 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:44:46 functional-171063 kubelet[6732]: E1206 08:44:46.850949    6732 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010686850341395 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:44:56 functional-171063 kubelet[6732]: E1206 08:44:56.853214    6732 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010696852868608 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:44:56 functional-171063 kubelet[6732]: E1206 08:44:56.853297    6732 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010696852868608 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:45:06 functional-171063 kubelet[6732]: E1206 08:45:06.855897    6732 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010706855406992 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:45:06 functional-171063 kubelet[6732]: E1206 08:45:06.855923    6732 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010706855406992 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:45:16 functional-171063 kubelet[6732]: E1206 08:45:16.859320    6732 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010716858891795 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:45:16 functional-171063 kubelet[6732]: E1206 08:45:16.859346    6732 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010716858891795 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:45:26 functional-171063 kubelet[6732]: E1206 08:45:26.727638    6732 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod03410f98f85a94c0d1b042fa28219496/crio-9657685460ba982192a9331437e654d36ae22b0c27bc3c058a95cb6c608f372f: Error finding container 9657685460ba982192a9331437e654d36ae22b0c27bc3c058a95cb6c608f372f: Status 404 returned error can't find the container with id 9657685460ba982192a9331437e654d36ae22b0c27bc3c058a95cb6c608f372f
	Dec 06 08:45:26 functional-171063 kubelet[6732]: E1206 08:45:26.728418    6732 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod60e56362ea97b0aab7118e6922366fff/crio-f5d0d39fa3ec09a2372d4db68a4acdf0f2779d019154f629498a5c2e5670c654: Error finding container f5d0d39fa3ec09a2372d4db68a4acdf0f2779d019154f629498a5c2e5670c654: Status 404 returned error can't find the container with id f5d0d39fa3ec09a2372d4db68a4acdf0f2779d019154f629498a5c2e5670c654
	Dec 06 08:45:26 functional-171063 kubelet[6732]: E1206 08:45:26.861899    6732 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010726861210036 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:45:26 functional-171063 kubelet[6732]: E1206 08:45:26.862010    6732 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010726861210036 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:45:36 functional-171063 kubelet[6732]: E1206 08:45:36.865210    6732 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010736864487894 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:45:36 functional-171063 kubelet[6732]: E1206 08:45:36.865269    6732 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010736864487894 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:45:46 functional-171063 kubelet[6732]: E1206 08:45:46.868988    6732 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010746867830240 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:45:46 functional-171063 kubelet[6732]: E1206 08:45:46.869022    6732 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010746867830240 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:45:56 functional-171063 kubelet[6732]: E1206 08:45:56.872959    6732 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010756871493823 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:45:56 functional-171063 kubelet[6732]: E1206 08:45:56.872983    6732 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010756871493823 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:46:06 functional-171063 kubelet[6732]: E1206 08:46:06.876154    6732 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765010766875723487 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	Dec 06 08:46:06 functional-171063 kubelet[6732]: E1206 08:46:06.876466    6732 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765010766875723487 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:239993} inodes_used:{value:111}}"
	
	
	==> kubernetes-dashboard [0770b78c153fffcc576a5fe0b8bb64ffc1459c2b1f4e512493bda405b402974c] <==
	2025/12/06 08:40:32 Using namespace: kubernetes-dashboard
	2025/12/06 08:40:32 Using in-cluster config to connect to apiserver
	2025/12/06 08:40:32 Using secret token for csrf signing
	2025/12/06 08:40:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/06 08:40:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/06 08:40:32 Successful initial request to the apiserver, version: v1.34.2
	2025/12/06 08:40:32 Generating JWE encryption key
	2025/12/06 08:40:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/06 08:40:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/06 08:40:32 Initializing JWE encryption key from synchronized object
	2025/12/06 08:40:32 Creating in-cluster Sidecar client
	2025/12/06 08:40:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/06 08:40:32 Serving insecurely on HTTP port: 9090
	2025/12/06 08:41:02 Successful request to sidecar
	2025/12/06 08:40:32 Starting overwatch
	
	
	==> storage-provisioner [3900711086fd088de5f88b00a515b8c326c94cc30cef68f88e82960223a1364e] <==
	I1206 08:39:07.153966       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1206 08:39:07.158146       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [94460fb20af90dec7098bde575ad2c14afca516125b9a4ccb8755b193c945d61] <==
	W1206 08:45:42.136140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:45:44.139012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:45:44.144411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:45:46.147936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:45:46.155957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:45:48.158884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:45:48.164650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:45:50.168539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:45:50.173183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:45:52.176006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:45:52.181703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:45:54.185849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:45:54.191262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:45:56.194162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:45:56.203785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:45:58.207913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:45:58.213142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:46:00.216245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:46:00.225708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:46:02.229291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:46:02.235622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:46:04.238852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:46:04.248040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:46:06.252534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 08:46:06.259836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
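The dump above shows two recurring problems: the kubelet entries have eviction_manager repeatedly failing with "missing image stats" for the cri-o overlay-images filesystem, and the first storage-provisioner container exits immediately because the apiserver service at 10.96.0.1:443 refuses connections during the restart window. A minimal diagnostic sketch against the same profile (the profile and pod names are taken from the logs above; these commands are illustrative and were not part of the recorded run):

	# How the CRI reports the image filesystem that eviction_manager could not read
	out/minikube-linux-amd64 -p functional-171063 ssh "sudo crictl imagefsinfo"
	# Logs from the storage-provisioner container that exited with "connection refused"
	kubectl --context functional-171063 -n kube-system logs storage-provisioner --previous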
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-171063 -n functional-171063
helpers_test.go:269: (dbg) Run:  kubectl --context functional-171063 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-171063 describe pod busybox-mount sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-171063 describe pod busybox-mount sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-171063/192.168.39.67
	Start Time:       Sat, 06 Dec 2025 08:40:20 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://f2c53e3233352048354f73e24ebd59b38cec04468d76349cce715e88511004c6
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 06 Dec 2025 08:40:24 +0000
	      Finished:     Sat, 06 Dec 2025 08:40:24 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dcspm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-dcspm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m47s  default-scheduler  Successfully assigned default/busybox-mount to functional-171063
	  Normal  Pulling    5m46s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m43s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.74s (3.74s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m43s  kubelet            Created container: mount-munger
	  Normal  Started    5m43s  kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-171063/192.168.39.67
	Start Time:       Sat, 06 Dec 2025 08:40:05 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m26dg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-m26dg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6m2s  default-scheduler  Successfully assigned default/sp-pod to functional-171063

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (370.01s)
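The describe output above shows sp-pod still Pending in ContainerCreating roughly six minutes after scheduling, mounting PersistentVolumeClaim myclaim, with Scheduled as its only event, which is consistent with the claim never binding while storage-provisioner was restarting. A short sketch of checks one might run when reproducing (names are taken from the output above; the commands are illustrative and were not executed as part of this run):

	# Did the claim bind, and what did the provisioner record against it?
	kubectl --context functional-171063 get pvc myclaim -o wide
	kubectl --context functional-171063 describe pvc myclaim
	# Events for the stuck pod itself
	kubectl --context functional-171063 get events --field-selector involvedObject.name=sp-pod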

                                                
                                    
x
+
TestPreload (159.79s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-109333 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1206 09:29:04.656679    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-109333 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m39.180124309s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-109333 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-109333 image pull gcr.io/k8s-minikube/busybox: (3.941805056s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-109333
E1206 09:29:56.581859    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-109333: (8.379342315s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-109333 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-109333 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (45.718390381s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-109333 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
panic.go:615: *** TestPreload FAILED at 2025-12-06 09:30:44.206602094 +0000 UTC m=+3752.135225020
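The failing expectation can be reproduced by hand with the same command sequence the test drives (flags and the profile name are copied from the run above; this is a sketch, not part of the recorded run): gcr.io/k8s-minikube/busybox is pulled before the stop and is expected to survive the preloaded restart.

	out/minikube-linux-amd64 start -p test-preload-109333 --memory=3072 --preload=false --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-109333 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-109333
	out/minikube-linux-amd64 start -p test-preload-109333 --preload=true --driver=kvm2 --container-runtime=crio
	# busybox should still appear here; in this run it did not
	out/minikube-linux-amd64 -p test-preload-109333 image list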
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-109333 -n test-preload-109333
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-109333 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-109333 logs -n 25: (1.005656518s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-240535 ssh -n multinode-240535-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:17 UTC │ 06 Dec 25 09:17 UTC │
	│ ssh     │ multinode-240535 ssh -n multinode-240535 sudo cat /home/docker/cp-test_multinode-240535-m03_multinode-240535.txt                                          │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:17 UTC │ 06 Dec 25 09:17 UTC │
	│ cp      │ multinode-240535 cp multinode-240535-m03:/home/docker/cp-test.txt multinode-240535-m02:/home/docker/cp-test_multinode-240535-m03_multinode-240535-m02.txt │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:17 UTC │ 06 Dec 25 09:17 UTC │
	│ ssh     │ multinode-240535 ssh -n multinode-240535-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:17 UTC │ 06 Dec 25 09:17 UTC │
	│ ssh     │ multinode-240535 ssh -n multinode-240535-m02 sudo cat /home/docker/cp-test_multinode-240535-m03_multinode-240535-m02.txt                                  │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:17 UTC │ 06 Dec 25 09:17 UTC │
	│ node    │ multinode-240535 node stop m03                                                                                                                            │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:17 UTC │ 06 Dec 25 09:17 UTC │
	│ node    │ multinode-240535 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:17 UTC │ 06 Dec 25 09:17 UTC │
	│ node    │ list -p multinode-240535                                                                                                                                  │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:17 UTC │                     │
	│ stop    │ -p multinode-240535                                                                                                                                       │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:17 UTC │ 06 Dec 25 09:20 UTC │
	│ start   │ -p multinode-240535 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:22 UTC │
	│ node    │ list -p multinode-240535                                                                                                                                  │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:22 UTC │                     │
	│ node    │ multinode-240535 node delete m03                                                                                                                          │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:22 UTC │ 06 Dec 25 09:23 UTC │
	│ stop    │ multinode-240535 stop                                                                                                                                     │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:23 UTC │ 06 Dec 25 09:25 UTC │
	│ start   │ -p multinode-240535 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:25 UTC │ 06 Dec 25 09:27 UTC │
	│ node    │ list -p multinode-240535                                                                                                                                  │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ start   │ -p multinode-240535-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-240535-m02 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ start   │ -p multinode-240535-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-240535-m03 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:28 UTC │
	│ node    │ add -p multinode-240535                                                                                                                                   │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:28 UTC │                     │
	│ delete  │ -p multinode-240535-m03                                                                                                                                   │ multinode-240535-m03 │ jenkins │ v1.37.0 │ 06 Dec 25 09:28 UTC │ 06 Dec 25 09:28 UTC │
	│ delete  │ -p multinode-240535                                                                                                                                       │ multinode-240535     │ jenkins │ v1.37.0 │ 06 Dec 25 09:28 UTC │ 06 Dec 25 09:28 UTC │
	│ start   │ -p test-preload-109333 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-109333  │ jenkins │ v1.37.0 │ 06 Dec 25 09:28 UTC │ 06 Dec 25 09:29 UTC │
	│ image   │ test-preload-109333 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-109333  │ jenkins │ v1.37.0 │ 06 Dec 25 09:29 UTC │ 06 Dec 25 09:29 UTC │
	│ stop    │ -p test-preload-109333                                                                                                                                    │ test-preload-109333  │ jenkins │ v1.37.0 │ 06 Dec 25 09:29 UTC │ 06 Dec 25 09:29 UTC │
	│ start   │ -p test-preload-109333 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-109333  │ jenkins │ v1.37.0 │ 06 Dec 25 09:29 UTC │ 06 Dec 25 09:30 UTC │
	│ image   │ test-preload-109333 image list                                                                                                                            │ test-preload-109333  │ jenkins │ v1.37.0 │ 06 Dec 25 09:30 UTC │ 06 Dec 25 09:30 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:29:58
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:29:58.354117   36573 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:29:58.354364   36573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:29:58.354373   36573 out.go:374] Setting ErrFile to fd 2...
	I1206 09:29:58.354377   36573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:29:58.354581   36573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 09:29:58.354972   36573 out.go:368] Setting JSON to false
	I1206 09:29:58.355784   36573 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4340,"bootTime":1765009058,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:29:58.355832   36573 start.go:143] virtualization: kvm guest
	I1206 09:29:58.357675   36573 out.go:179] * [test-preload-109333] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:29:58.358826   36573 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:29:58.358817   36573 notify.go:221] Checking for updates...
	I1206 09:29:58.359985   36573 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:29:58.361217   36573 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	I1206 09:29:58.362405   36573 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	I1206 09:29:58.363488   36573 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:29:58.364643   36573 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:29:58.366023   36573 config.go:182] Loaded profile config "test-preload-109333": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:29:58.366480   36573 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:29:58.399864   36573 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:29:58.400850   36573 start.go:309] selected driver: kvm2
	I1206 09:29:58.400861   36573 start.go:927] validating driver "kvm2" against &{Name:test-preload-109333 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-109333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:29:58.400945   36573 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:29:58.401820   36573 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:29:58.401847   36573 cni.go:84] Creating CNI manager for ""
	I1206 09:29:58.401892   36573 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:29:58.401943   36573 start.go:353] cluster config:
	{Name:test-preload-109333 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-109333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:29:58.402052   36573 iso.go:125] acquiring lock: {Name:mk30cf35cfaf5c28a2b5f78c7b431de5eb8c8e82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:29:58.403237   36573 out.go:179] * Starting "test-preload-109333" primary control-plane node in "test-preload-109333" cluster
	I1206 09:29:58.404114   36573 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:29:58.404137   36573 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:29:58.404151   36573 cache.go:65] Caching tarball of preloaded images
	I1206 09:29:58.404233   36573 preload.go:238] Found /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:29:58.404245   36573 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:29:58.404319   36573 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/config.json ...
	I1206 09:29:58.404503   36573 start.go:360] acquireMachinesLock for test-preload-109333: {Name:mk3342af5720fb96b5115fa945410cab4f7bd1fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 09:29:58.404545   36573 start.go:364] duration metric: took 26.823µs to acquireMachinesLock for "test-preload-109333"
	I1206 09:29:58.404558   36573 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:29:58.404562   36573 fix.go:54] fixHost starting: 
	I1206 09:29:58.405963   36573 fix.go:112] recreateIfNeeded on test-preload-109333: state=Stopped err=<nil>
	W1206 09:29:58.405980   36573 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:29:58.407729   36573 out.go:252] * Restarting existing kvm2 VM for "test-preload-109333" ...
	I1206 09:29:58.407751   36573 main.go:143] libmachine: starting domain...
	I1206 09:29:58.407758   36573 main.go:143] libmachine: ensuring networks are active...
	I1206 09:29:58.408394   36573 main.go:143] libmachine: Ensuring network default is active
	I1206 09:29:58.408772   36573 main.go:143] libmachine: Ensuring network mk-test-preload-109333 is active
	I1206 09:29:58.409142   36573 main.go:143] libmachine: getting domain XML...
	I1206 09:29:58.410088   36573 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-109333</name>
	  <uuid>156490f4-4865-46f9-8f6d-cdd2a335e3ca</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22049-5603/.minikube/machines/test-preload-109333/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22049-5603/.minikube/machines/test-preload-109333/test-preload-109333.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:7b:f8:ed'/>
	      <source network='mk-test-preload-109333'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:84:0f:bf'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
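
The kvm2 driver feeds this XML to libvirt and then boots the guest, which is what the "starting domain" lines that follow record. Below is a minimal sketch of the same define-and-start flow against qemu:///system using the upstream libvirt Go bindings rather than minikube's internal wrapper; the import path, XML file name, and error handling are assumptions for illustration only.

    package main

    import (
    	"log"
    	"os"

    	libvirt "libvirt.org/go/libvirt" // assumed binding; not minikube's own wrapper
    )

    func main() {
    	// Connect to the same URI the log shows (KVMQemuURI:qemu:///system).
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	// Read a domain definition like the one dumped above (file name is hypothetical).
    	xml, err := os.ReadFile("test-preload-109333.xml")
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Define (or redefine) the domain, then start it; Create is the API
    	// equivalent of `virsh start`.
    	dom, err := conn.DomainDefineXML(string(xml))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("domain started")
    }
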
	
	I1206 09:29:59.625215   36573 main.go:143] libmachine: waiting for domain to start...
	I1206 09:29:59.626433   36573 main.go:143] libmachine: domain is now running
	I1206 09:29:59.626448   36573 main.go:143] libmachine: waiting for IP...
	I1206 09:29:59.627197   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:29:59.627709   36573 main.go:143] libmachine: domain test-preload-109333 has current primary IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:29:59.627721   36573 main.go:143] libmachine: found domain IP: 192.168.39.120
	I1206 09:29:59.627726   36573 main.go:143] libmachine: reserving static IP address...
	I1206 09:29:59.628118   36573 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-109333", mac: "52:54:00:7b:f8:ed", ip: "192.168.39.120"} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:28:22 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:29:59.628141   36573 main.go:143] libmachine: skip adding static IP to network mk-test-preload-109333 - found existing host DHCP lease matching {name: "test-preload-109333", mac: "52:54:00:7b:f8:ed", ip: "192.168.39.120"}
	I1206 09:29:59.628151   36573 main.go:143] libmachine: reserved static IP address 192.168.39.120 for domain test-preload-109333
	I1206 09:29:59.628160   36573 main.go:143] libmachine: waiting for SSH...
	I1206 09:29:59.628167   36573 main.go:143] libmachine: Getting to WaitForSSH function...
	I1206 09:29:59.630203   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:29:59.630537   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:28:22 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:29:59.630564   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:29:59.630718   36573 main.go:143] libmachine: Using SSH client type: native
	I1206 09:29:59.630921   36573 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1206 09:29:59.630931   36573 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1206 09:30:02.697833   36573 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I1206 09:30:08.777767   36573 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: no route to host
	I1206 09:30:11.779103   36573 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.120:22: connect: connection refused
	I1206 09:30:14.885221   36573 main.go:143] libmachine: SSH cmd err, output: <nil>: 
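
The "waiting for SSH" phase above is essentially a dial-and-retry loop against port 22 on the guest; the intermediate "no route to host" and "connection refused" errors are expected while the VM is still booting. A rough stand-in for that loop is sketched below; the address, polling interval, and timeout are illustrative, not minikube's exact values.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSH polls the guest's SSH port until it accepts TCP connections
    // or the deadline passes.
    func waitForSSH(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil // port is open; the provisioner can run `exit 0` next
    		}
    		// "no route to host" / "connection refused" land here while the guest boots.
    		time.Sleep(3 * time.Second)
    	}
    	return fmt.Errorf("timed out waiting for SSH on %s", addr)
    }

    func main() {
    	if err := waitForSSH("192.168.39.120:22", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
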
	I1206 09:30:14.888667   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:14.889076   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:10 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:30:14.889099   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:14.889301   36573 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/config.json ...
	I1206 09:30:14.889502   36573 machine.go:94] provisionDockerMachine start ...
	I1206 09:30:14.891504   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:14.891850   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:10 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:30:14.891873   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:14.892018   36573 main.go:143] libmachine: Using SSH client type: native
	I1206 09:30:14.892204   36573 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1206 09:30:14.892214   36573 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:30:14.997771   36573 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1206 09:30:14.997798   36573 buildroot.go:166] provisioning hostname "test-preload-109333"
	I1206 09:30:15.000562   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:15.000956   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:10 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:30:15.000998   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:15.001166   36573 main.go:143] libmachine: Using SSH client type: native
	I1206 09:30:15.001359   36573 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1206 09:30:15.001371   36573 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-109333 && echo "test-preload-109333" | sudo tee /etc/hostname
	I1206 09:30:15.123544   36573 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-109333
	
	I1206 09:30:15.126776   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:15.127244   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:10 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:30:15.127281   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:15.127522   36573 main.go:143] libmachine: Using SSH client type: native
	I1206 09:30:15.127789   36573 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1206 09:30:15.127818   36573 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-109333' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-109333/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-109333' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:30:15.244948   36573 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:30:15.244977   36573 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5603/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5603/.minikube}
	I1206 09:30:15.244996   36573 buildroot.go:174] setting up certificates
	I1206 09:30:15.245008   36573 provision.go:84] configureAuth start
	I1206 09:30:15.248185   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:15.248606   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:10 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:30:15.248640   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:15.250912   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:15.251234   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:10 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:30:15.251253   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:15.251362   36573 provision.go:143] copyHostCerts
	I1206 09:30:15.251406   36573 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5603/.minikube/key.pem, removing ...
	I1206 09:30:15.251416   36573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5603/.minikube/key.pem
	I1206 09:30:15.251498   36573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5603/.minikube/key.pem (1675 bytes)
	I1206 09:30:15.251589   36573 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5603/.minikube/ca.pem, removing ...
	I1206 09:30:15.251598   36573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5603/.minikube/ca.pem
	I1206 09:30:15.251626   36573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5603/.minikube/ca.pem (1082 bytes)
	I1206 09:30:15.251689   36573 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5603/.minikube/cert.pem, removing ...
	I1206 09:30:15.251697   36573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5603/.minikube/cert.pem
	I1206 09:30:15.251721   36573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5603/.minikube/cert.pem (1123 bytes)
	I1206 09:30:15.251766   36573 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca-key.pem org=jenkins.test-preload-109333 san=[127.0.0.1 192.168.39.120 localhost minikube test-preload-109333]
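
The provision step above mints a server certificate whose SANs cover the loopback address, the guest IP, and the machine names, signed by the profile's CA. The sketch below issues a certificate with those SANs using crypto/x509; it self-signs for brevity (the real flow signs with ca.pem/ca-key.pem), and the key size, serial number, and validity are placeholders.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Placeholder key; the real flow reuses the CA key pair from the cert store.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-109333"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the log line: IPs plus host names.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.120")},
    		DNSNames:    []string{"localhost", "minikube", "test-preload-109333"},
    	}

    	// Self-signed here for brevity; the provisioner signs with the minikube CA.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
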
	I1206 09:30:15.307569   36573 provision.go:177] copyRemoteCerts
	I1206 09:30:15.307623   36573 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:30:15.309917   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:15.310240   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:10 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:30:15.310258   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:15.310366   36573 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/test-preload-109333/id_rsa Username:docker}
	I1206 09:30:15.396133   36573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:30:15.428549   36573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1206 09:30:15.461246   36573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:30:15.493236   36573 provision.go:87] duration metric: took 248.21726ms to configureAuth
	I1206 09:30:15.493265   36573 buildroot.go:189] setting minikube options for container-runtime
	I1206 09:30:15.493502   36573 config.go:182] Loaded profile config "test-preload-109333": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:30:15.496566   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:15.497038   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:10 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:30:15.497087   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:15.497269   36573 main.go:143] libmachine: Using SSH client type: native
	I1206 09:30:15.497502   36573 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1206 09:30:15.497534   36573 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:30:15.747396   36573 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:30:15.747421   36573 machine.go:97] duration metric: took 857.90733ms to provisionDockerMachine
	I1206 09:30:15.747432   36573 start.go:293] postStartSetup for "test-preload-109333" (driver="kvm2")
	I1206 09:30:15.747442   36573 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:30:15.747530   36573 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:30:15.750224   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:15.750642   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:10 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:30:15.750663   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:15.750807   36573 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/test-preload-109333/id_rsa Username:docker}
	I1206 09:30:15.839887   36573 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:30:15.845379   36573 info.go:137] Remote host: Buildroot 2025.02
	I1206 09:30:15.845405   36573 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5603/.minikube/addons for local assets ...
	I1206 09:30:15.845516   36573 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5603/.minikube/files for local assets ...
	I1206 09:30:15.845623   36573 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5603/.minikube/files/etc/ssl/certs/95522.pem -> 95522.pem in /etc/ssl/certs
	I1206 09:30:15.845762   36573 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:30:15.859933   36573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/files/etc/ssl/certs/95522.pem --> /etc/ssl/certs/95522.pem (1708 bytes)
	I1206 09:30:15.906143   36573 start.go:296] duration metric: took 158.698433ms for postStartSetup
	I1206 09:30:15.906182   36573 fix.go:56] duration metric: took 17.501619558s for fixHost
	I1206 09:30:15.909017   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:15.909543   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:10 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:30:15.909572   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:15.909779   36573 main.go:143] libmachine: Using SSH client type: native
	I1206 09:30:15.910019   36573 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1206 09:30:15.910034   36573 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1206 09:30:16.015521   36573 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765013415.965935665
	
	I1206 09:30:16.015550   36573 fix.go:216] guest clock: 1765013415.965935665
	I1206 09:30:16.015557   36573 fix.go:229] Guest: 2025-12-06 09:30:15.965935665 +0000 UTC Remote: 2025-12-06 09:30:15.906186822 +0000 UTC m=+17.597866594 (delta=59.748843ms)
	I1206 09:30:16.015572   36573 fix.go:200] guest clock delta is within tolerance: 59.748843ms
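
The clock check above parses the guest's `date +%s.%N` output and compares it against the host time; only a delta beyond some tolerance would trigger a resync. A toy version of that comparison follows; the tolerance constant is an assumption, not minikube's configured value.

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	// `date +%s.%N` output captured from the guest, as in the log above.
    	guestRaw := "1765013415.965935665"
    	guestSecs, err := strconv.ParseFloat(guestRaw, 64)
    	if err != nil {
    		panic(err)
    	}
    	hostSecs := float64(time.Now().UnixNano()) / 1e9
    	delta := time.Duration((hostSecs - guestSecs) * float64(time.Second))
    	if delta < 0 {
    		delta = -delta
    	}

    	const tolerance = 1 * time.Second // assumed threshold for illustration
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
    	}
    }
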
	I1206 09:30:16.015577   36573 start.go:83] releasing machines lock for "test-preload-109333", held for 17.611024053s
	I1206 09:30:16.018629   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:16.019021   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:10 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:30:16.019045   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:16.019658   36573 ssh_runner.go:195] Run: cat /version.json
	I1206 09:30:16.019726   36573 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:30:16.022718   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:16.022778   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:16.023151   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:10 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:30:16.023192   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:10 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:30:16.023200   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:16.023219   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:16.023394   36573 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/test-preload-109333/id_rsa Username:docker}
	I1206 09:30:16.023588   36573 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/test-preload-109333/id_rsa Username:docker}
	I1206 09:30:16.101608   36573 ssh_runner.go:195] Run: systemctl --version
	I1206 09:30:16.135935   36573 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:30:16.283409   36573 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:30:16.290919   36573 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:30:16.291010   36573 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:30:16.312758   36573 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:30:16.312783   36573 start.go:496] detecting cgroup driver to use...
	I1206 09:30:16.312838   36573 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:30:16.333524   36573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:30:16.351568   36573 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:30:16.351646   36573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:30:16.375685   36573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:30:16.393704   36573 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:30:16.543953   36573 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:30:16.774620   36573 docker.go:234] disabling docker service ...
	I1206 09:30:16.774701   36573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:30:16.792515   36573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:30:16.808904   36573 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:30:16.971751   36573 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:30:17.126990   36573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:30:17.144765   36573 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:30:17.170543   36573 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:30:17.170611   36573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:30:17.183917   36573 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 09:30:17.183999   36573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:30:17.197677   36573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:30:17.211832   36573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:30:17.225517   36573 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:30:17.240553   36573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:30:17.254522   36573 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:30:17.277522   36573 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:30:17.291098   36573 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:30:17.303009   36573 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 09:30:17.303086   36573 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 09:30:17.325573   36573 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:30:17.338489   36573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:30:17.486640   36573 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:30:17.607161   36573 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:30:17.607241   36573 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:30:17.613350   36573 start.go:564] Will wait 60s for crictl version
	I1206 09:30:17.613425   36573 ssh_runner.go:195] Run: which crictl
	I1206 09:30:17.618345   36573 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 09:30:17.657880   36573 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1206 09:30:17.657966   36573 ssh_runner.go:195] Run: crio --version
	I1206 09:30:17.689986   36573 ssh_runner.go:195] Run: crio --version
	I1206 09:30:17.725715   36573 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1206 09:30:17.729794   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:17.730179   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:10 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:30:17.730205   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:17.730380   36573 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 09:30:17.735481   36573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:30:17.752186   36573 kubeadm.go:884] updating cluster {Name:test-preload-109333 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-109333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:30:17.752346   36573 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:30:17.752397   36573 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:30:17.789717   36573 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1206 09:30:17.789794   36573 ssh_runner.go:195] Run: which lz4
	I1206 09:30:17.794841   36573 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 09:30:17.800388   36573 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 09:30:17.800425   36573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1206 09:30:19.239638   36573 crio.go:462] duration metric: took 1.444832273s to copy over tarball
	I1206 09:30:19.239728   36573 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 09:30:20.760883   36573 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.521123671s)
	I1206 09:30:20.760914   36573 crio.go:469] duration metric: took 1.521232455s to extract the tarball
	I1206 09:30:20.760922   36573 ssh_runner.go:146] rm: /preloaded.tar.lz4
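
Extracting the preload boils down to running tar with an lz4 decompressor over the copied archive, exactly as the ssh_runner lines above show. The same invocation wrapped in os/exec is sketched below, run locally purely for illustration.

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Mirrors the command the log shows being executed on the guest.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    	log.Println("preloaded images extracted")
    }
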
	I1206 09:30:20.799398   36573 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:30:20.840115   36573 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:30:20.840138   36573 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:30:20.840145   36573 kubeadm.go:935] updating node { 192.168.39.120 8443 v1.34.2 crio true true} ...
	I1206 09:30:20.840295   36573 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-109333 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-109333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:30:20.840373   36573 ssh_runner.go:195] Run: crio config
	I1206 09:30:20.891752   36573 cni.go:84] Creating CNI manager for ""
	I1206 09:30:20.891776   36573 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:30:20.891792   36573 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:30:20.891811   36573 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-109333 NodeName:test-preload-109333 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:30:20.891961   36573 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-109333"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.120"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
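
The kubeadm, kubelet, and kube-proxy documents above are rendered from the cluster config (node IP, names, CIDRs, Kubernetes version) before being copied to /var/tmp/minikube/kubeadm.yaml.new. A stripped-down illustration of that kind of rendering with text/template follows; the template text and parameter struct here are abbreviations for illustration, not minikube's actual template.

    package main

    import (
    	"os"
    	"text/template"
    )

    // Only the handful of fields the snippet below needs; the real template is
    // driven by a much larger kubeadm options struct like the one logged above.
    type params struct {
    	NodeIP      string
    	NodeName    string
    	PodSubnet   string
    	ServiceCIDR string
    	K8sVersion  string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: 8443
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	_ = t.Execute(os.Stdout, params{
    		NodeIP:      "192.168.39.120",
    		NodeName:    "test-preload-109333",
    		PodSubnet:   "10.244.0.0/16",
    		ServiceCIDR: "10.96.0.0/12",
    		K8sVersion:  "v1.34.2",
    	})
    }
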
	
	I1206 09:30:20.892031   36573 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:30:20.905250   36573 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:30:20.905341   36573 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:30:20.918114   36573 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1206 09:30:20.940582   36573 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:30:20.962451   36573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1206 09:30:20.986179   36573 ssh_runner.go:195] Run: grep 192.168.39.120	control-plane.minikube.internal$ /etc/hosts
	I1206 09:30:20.991055   36573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:30:21.007425   36573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:30:21.153832   36573 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:30:21.175522   36573 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333 for IP: 192.168.39.120
	I1206 09:30:21.175544   36573 certs.go:195] generating shared ca certs ...
	I1206 09:30:21.175559   36573 certs.go:227] acquiring lock for ca certs: {Name:mk000359972764fead2b3aaf8b843862aa35270c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:30:21.175709   36573 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5603/.minikube/ca.key
	I1206 09:30:21.175760   36573 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.key
	I1206 09:30:21.175770   36573 certs.go:257] generating profile certs ...
	I1206 09:30:21.175843   36573 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/client.key
	I1206 09:30:21.175900   36573 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/apiserver.key.f6855f9f
	I1206 09:30:21.175943   36573 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/proxy-client.key
	I1206 09:30:21.176046   36573 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/9552.pem (1338 bytes)
	W1206 09:30:21.176078   36573 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5603/.minikube/certs/9552_empty.pem, impossibly tiny 0 bytes
	I1206 09:30:21.176088   36573 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:30:21.176110   36573 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:30:21.176133   36573 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:30:21.176155   36573 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/key.pem (1675 bytes)
	I1206 09:30:21.176194   36573 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/files/etc/ssl/certs/95522.pem (1708 bytes)
	I1206 09:30:21.176740   36573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:30:21.226004   36573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:30:21.268065   36573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:30:21.302515   36573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:30:21.335003   36573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1206 09:30:21.367641   36573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:30:21.398957   36573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:30:21.430940   36573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:30:21.464407   36573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:30:21.497229   36573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/certs/9552.pem --> /usr/share/ca-certificates/9552.pem (1338 bytes)
	I1206 09:30:21.529488   36573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/files/etc/ssl/certs/95522.pem --> /usr/share/ca-certificates/95522.pem (1708 bytes)
	I1206 09:30:21.562820   36573 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:30:21.586058   36573 ssh_runner.go:195] Run: openssl version
	I1206 09:30:21.593636   36573 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:30:21.606528   36573 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:30:21.619814   36573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:30:21.625920   36573 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:30:21.625996   36573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:30:21.634586   36573 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:30:21.647879   36573 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:30:21.660738   36573 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9552.pem
	I1206 09:30:21.673492   36573 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9552.pem /etc/ssl/certs/9552.pem
	I1206 09:30:21.686423   36573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9552.pem
	I1206 09:30:21.692662   36573 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:46 /usr/share/ca-certificates/9552.pem
	I1206 09:30:21.692741   36573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9552.pem
	I1206 09:30:21.700745   36573 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:30:21.713985   36573 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/9552.pem /etc/ssl/certs/51391683.0
	I1206 09:30:21.727067   36573 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/95522.pem
	I1206 09:30:21.741085   36573 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/95522.pem /etc/ssl/certs/95522.pem
	I1206 09:30:21.754236   36573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95522.pem
	I1206 09:30:21.760257   36573 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:46 /usr/share/ca-certificates/95522.pem
	I1206 09:30:21.760330   36573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95522.pem
	I1206 09:30:21.768192   36573 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:30:21.780934   36573 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/95522.pem /etc/ssl/certs/3ec20f2e.0
	I1206 09:30:21.793870   36573 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:30:21.800139   36573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:30:21.808501   36573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:30:21.816823   36573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:30:21.825334   36573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:30:21.833910   36573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:30:21.842368   36573 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
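
Each `openssl x509 ... -checkend 86400` run above simply asks whether a certificate remains valid for at least another day. The equivalent check in Go for one of those files is sketched below; the path is copied from the log, and the printed wording is illustrative.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	// Any of the certs the log checks would do; this path is taken from it.
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Equivalent of `-checkend 86400`: does the cert outlive the next 24h?
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h; it would be regenerated")
    	} else {
    		fmt.Println("certificate valid for at least another 24h")
    	}
    }
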
	I1206 09:30:21.850555   36573 kubeadm.go:401] StartCluster: {Name:test-preload-109333 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-109333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:30:21.850631   36573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:30:21.850720   36573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:30:21.891890   36573 cri.go:89] found id: ""
	I1206 09:30:21.891971   36573 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:30:21.908271   36573 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 09:30:21.908300   36573 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 09:30:21.908377   36573 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 09:30:21.923828   36573 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:30:21.924460   36573 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-109333" does not appear in /home/jenkins/minikube-integration/22049-5603/kubeconfig
	I1206 09:30:21.924645   36573 kubeconfig.go:62] /home/jenkins/minikube-integration/22049-5603/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-109333" cluster setting kubeconfig missing "test-preload-109333" context setting]
	I1206 09:30:21.925026   36573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/kubeconfig: {Name:mk8c42c505f5f7f0ebf46166194656af7c5589e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:30:21.925790   36573 kapi.go:59] client config for test-preload-109333: &rest.Config{Host:"https://192.168.39.120:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/client.key", CAFile:"/home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
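
The rest.Config dump above is the API client minikube assembles from the profile's client certificate and key plus the cluster CA. Building an equivalent client with client-go looks roughly like the sketch below; the paths are taken from the log, while the node listing at the end is just an illustrative use of the resulting clientset.

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://192.168.39.120:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// The node_ready wait ultimately boils down to queries like this one.
    	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("nodes:", len(nodes.Items))
    }
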
	I1206 09:30:21.926366   36573 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 09:30:21.926390   36573 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 09:30:21.926402   36573 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 09:30:21.926417   36573 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 09:30:21.926426   36573 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
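
Editor's note: the kapi.go:59 entry above dumps the rest.Config that minikube builds for this profile: the API server address plus the per-profile client certificate/key and the cluster CA. Below is a minimal, hypothetical client-go sketch (not minikube's code) that builds an equivalent config and clientset from those same three files; the host and paths are copied from the log line above.

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Host and TLS file paths are taken from the kapi.go:59 dump above;
        // this is an illustrative sketch, not minikube's implementation.
        cfg := &rest.Config{
            Host: "https://192.168.39.120:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/client.key",
                CAFile:   "/home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Any call through the clientset now authenticates with the profile's client cert.
        nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("nodes:", len(nodes.Items))
    }
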
	I1206 09:30:21.926914   36573 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 09:30:21.947614   36573 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.120
	I1206 09:30:21.947660   36573 kubeadm.go:1161] stopping kube-system containers ...
	I1206 09:30:21.947677   36573 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 09:30:21.947758   36573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:30:21.999872   36573 cri.go:89] found id: ""
	I1206 09:30:21.999951   36573 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 09:30:22.020220   36573 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:30:22.033978   36573 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:30:22.034003   36573 kubeadm.go:158] found existing configuration files:
	
	I1206 09:30:22.034047   36573 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:30:22.046040   36573 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:30:22.046109   36573 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:30:22.059232   36573 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:30:22.071603   36573 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:30:22.071676   36573 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:30:22.084894   36573 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:30:22.097143   36573 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:30:22.097215   36573 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:30:22.111129   36573 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:30:22.124033   36573 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:30:22.124106   36573 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
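
Editor's note: the four grep/rm pairs above are kubeadm.go's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so the following `kubeadm init phase kubeconfig` can regenerate it. A minimal sketch of that loop, assuming it runs directly on the node rather than through ssh_runner:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is missing (or the file does not
            // exist), which is the "may not be in ... - will remove" case in the log.
            if err := exec.Command("grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s lacks %s, removing\n", f, endpoint)
                _ = os.Remove(f) // ignore "no such file", as `rm -f` would
            }
        }
    }
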
	I1206 09:30:22.136963   36573 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:30:22.149673   36573 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 09:30:22.212066   36573 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 09:30:23.140549   36573 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 09:30:23.399641   36573 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 09:30:23.469275   36573 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 09:30:23.546909   36573 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:30:23.546993   36573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:30:24.047149   36573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:30:24.547783   36573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:30:25.048122   36573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:30:25.548048   36573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:30:26.047818   36573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:30:26.080609   36573 api_server.go:72] duration metric: took 2.533714968s to wait for apiserver process to appear ...
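
Editor's note: api_server.go:52 above polls roughly every 500 ms for a running kube-apiserver process before it starts hitting /healthz. A hypothetical stand-alone version of that wait, reusing the exact pgrep pattern shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until the pattern matches or the timeout expires.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
                return nil // a matching process exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
        // Pattern copied from the Run: lines above; the timeout is illustrative.
        if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kube-apiserver process is up")
    }
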
	I1206 09:30:26.080640   36573 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:30:26.080661   36573 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1206 09:30:28.211191   36573 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 09:30:28.211238   36573 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 09:30:28.211256   36573 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1206 09:30:28.265772   36573 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 09:30:28.265805   36573 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 09:30:28.581706   36573 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1206 09:30:28.598700   36573 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:30:28.598729   36573 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:30:29.081491   36573 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1206 09:30:29.089171   36573 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 09:30:29.089198   36573 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 09:30:29.580848   36573 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1206 09:30:29.585701   36573 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1206 09:30:29.594076   36573 api_server.go:141] control plane version: v1.34.2
	I1206 09:30:29.594101   36573 api_server.go:131] duration metric: took 3.513455932s to wait for apiserver health ...
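
Editor's note: the block above shows the /healthz probe progressing from 403 (the unauthenticated request is rejected while RBAC bootstrap is still running) to 500 (the rbac/bootstrap-roles and priority-class post-start hooks are still failing) to the final 200 "ok". Below is a minimal sketch of such a health wait, assuming the profile's CA certificate is used to verify the serving cert; minikube's own check lives in api_server.go and differs in detail.

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        // CA path and endpoint are copied from the log above; the retry policy is illustrative.
        caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.39.120:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz -> %d %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // "ok", as in the final healthz line above
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never reported healthy")
    }
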
	I1206 09:30:29.594110   36573 cni.go:84] Creating CNI manager for ""
	I1206 09:30:29.594116   36573 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:30:29.596264   36573 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 09:30:29.597784   36573 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 09:30:29.624521   36573 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
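
Editor's note: out.go above reports "Configuring bridge CNI" and a 496-byte /etc/cni/net.d/1-k8s.conflist is copied onto the node, but the file's contents are not printed in the log. The sketch below writes a generic bridge + portmap conflist of the kind the CNI bridge plugin expects; the subnet, bridge name, and other values are illustrative assumptions, not minikube's actual file.

    package main

    import (
        "log"
        "os"
    )

    // A generic CNI bridge configuration; all field values are illustrative only.
    const conflist = `{
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }
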
	I1206 09:30:29.652361   36573 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:30:29.659784   36573 system_pods.go:59] 7 kube-system pods found
	I1206 09:30:29.659817   36573 system_pods.go:61] "coredns-66bc5c9577-gb46m" [af079579-359c-4a43-841d-0abf03d70d5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:30:29.659824   36573 system_pods.go:61] "etcd-test-preload-109333" [42a2dd97-5ea9-4048-a1f8-817fa5f95e2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:30:29.659833   36573 system_pods.go:61] "kube-apiserver-test-preload-109333" [d18f7d04-8305-4e86-afe0-660fd13618e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:30:29.659839   36573 system_pods.go:61] "kube-controller-manager-test-preload-109333" [58243c77-fc49-4b8d-8f6c-da75143e0a7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:30:29.659844   36573 system_pods.go:61] "kube-proxy-z29gd" [9a27e049-4998-470c-8e52-379de1e010d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 09:30:29.659849   36573 system_pods.go:61] "kube-scheduler-test-preload-109333" [140a732f-811b-42e0-9b06-ca310cf11882] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:30:29.659855   36573 system_pods.go:61] "storage-provisioner" [0baac458-b032-476e-9a0c-a3883e13328d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:30:29.659863   36573 system_pods.go:74] duration metric: took 7.481655ms to wait for pod list to return data ...
	I1206 09:30:29.659872   36573 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:30:29.664463   36573 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1206 09:30:29.664509   36573 node_conditions.go:123] node cpu capacity is 2
	I1206 09:30:29.664527   36573 node_conditions.go:105] duration metric: took 4.649229ms to run NodePressure ...
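
Editor's note: node_conditions.go above verifies NodePressure by reading the node's reported capacity, here 17734596Ki of ephemeral storage and 2 CPUs. A hypothetical client-go snippet that reads the same two figures (the kubeconfig path and node name are taken from the log; this is not minikube's own code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path and node name copied from the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22049-5603/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "test-preload-109333", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // The two capacity figures printed by node_conditions.go above.
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("cpu capacity: %s, ephemeral-storage capacity: %s\n", cpu.String(), storage.String())
    }
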
	I1206 09:30:29.664577   36573 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 09:30:29.983217   36573 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1206 09:30:29.988059   36573 kubeadm.go:744] kubelet initialised
	I1206 09:30:29.988081   36573 kubeadm.go:745] duration metric: took 4.839827ms waiting for restarted kubelet to initialise ...
	I1206 09:30:29.988096   36573 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:30:30.005013   36573 ops.go:34] apiserver oom_adj: -16
	I1206 09:30:30.005041   36573 kubeadm.go:602] duration metric: took 8.096734966s to restartPrimaryControlPlane
	I1206 09:30:30.005052   36573 kubeadm.go:403] duration metric: took 8.15450611s to StartCluster
	I1206 09:30:30.005067   36573 settings.go:142] acquiring lock: {Name:mk1c4376642fa0e1442961c9690dcfd3d7346ba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:30:30.005144   36573 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22049-5603/kubeconfig
	I1206 09:30:30.005767   36573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22049-5603/kubeconfig: {Name:mk8c42c505f5f7f0ebf46166194656af7c5589e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:30:30.006017   36573 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:30:30.006087   36573 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 09:30:30.006179   36573 addons.go:70] Setting storage-provisioner=true in profile "test-preload-109333"
	I1206 09:30:30.006198   36573 addons.go:239] Setting addon storage-provisioner=true in "test-preload-109333"
	W1206 09:30:30.006204   36573 addons.go:248] addon storage-provisioner should already be in state true
	I1206 09:30:30.006199   36573 addons.go:70] Setting default-storageclass=true in profile "test-preload-109333"
	I1206 09:30:30.006230   36573 host.go:66] Checking if "test-preload-109333" exists ...
	I1206 09:30:30.006204   36573 config.go:182] Loaded profile config "test-preload-109333": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:30:30.006232   36573 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-109333"
	I1206 09:30:30.007718   36573 out.go:179] * Verifying Kubernetes components...
	I1206 09:30:30.008747   36573 kapi.go:59] client config for test-preload-109333: &rest.Config{Host:"https://192.168.39.120:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/client.key", CAFile:"/home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 09:30:30.009079   36573 addons.go:239] Setting addon default-storageclass=true in "test-preload-109333"
	W1206 09:30:30.009094   36573 addons.go:248] addon default-storageclass should already be in state true
	I1206 09:30:30.009111   36573 host.go:66] Checking if "test-preload-109333" exists ...
	I1206 09:30:30.009185   36573 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:30:30.009233   36573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:30:30.010327   36573 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:30:30.010343   36573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:30:30.010939   36573 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:30:30.010958   36573 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:30:30.013549   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:30.014017   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:10 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:30:30.014030   36573 main.go:143] libmachine: domain test-preload-109333 has defined MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:30.014070   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:30.014284   36573 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/test-preload-109333/id_rsa Username:docker}
	I1206 09:30:30.014529   36573 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:ed", ip: ""} in network mk-test-preload-109333: {Iface:virbr1 ExpiryTime:2025-12-06 10:30:10 +0000 UTC Type:0 Mac:52:54:00:7b:f8:ed Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:test-preload-109333 Clientid:01:52:54:00:7b:f8:ed}
	I1206 09:30:30.014556   36573 main.go:143] libmachine: domain test-preload-109333 has defined IP address 192.168.39.120 and MAC address 52:54:00:7b:f8:ed in network mk-test-preload-109333
	I1206 09:30:30.014685   36573 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/test-preload-109333/id_rsa Username:docker}
	I1206 09:30:30.222892   36573 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:30:30.247761   36573 node_ready.go:35] waiting up to 6m0s for node "test-preload-109333" to be "Ready" ...
	I1206 09:30:30.251105   36573 node_ready.go:49] node "test-preload-109333" is "Ready"
	I1206 09:30:30.251133   36573 node_ready.go:38] duration metric: took 3.323793ms for node "test-preload-109333" to be "Ready" ...
	I1206 09:30:30.251145   36573 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:30:30.251199   36573 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:30:30.274626   36573 api_server.go:72] duration metric: took 268.573047ms to wait for apiserver process to appear ...
	I1206 09:30:30.274663   36573 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:30:30.274695   36573 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1206 09:30:30.280292   36573 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1206 09:30:30.282166   36573 api_server.go:141] control plane version: v1.34.2
	I1206 09:30:30.282187   36573 api_server.go:131] duration metric: took 7.516218ms to wait for apiserver health ...
	I1206 09:30:30.282195   36573 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:30:30.296860   36573 system_pods.go:59] 7 kube-system pods found
	I1206 09:30:30.296889   36573 system_pods.go:61] "coredns-66bc5c9577-gb46m" [af079579-359c-4a43-841d-0abf03d70d5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:30:30.296895   36573 system_pods.go:61] "etcd-test-preload-109333" [42a2dd97-5ea9-4048-a1f8-817fa5f95e2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:30:30.296904   36573 system_pods.go:61] "kube-apiserver-test-preload-109333" [d18f7d04-8305-4e86-afe0-660fd13618e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:30:30.296910   36573 system_pods.go:61] "kube-controller-manager-test-preload-109333" [58243c77-fc49-4b8d-8f6c-da75143e0a7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:30:30.296914   36573 system_pods.go:61] "kube-proxy-z29gd" [9a27e049-4998-470c-8e52-379de1e010d2] Running
	I1206 09:30:30.296920   36573 system_pods.go:61] "kube-scheduler-test-preload-109333" [140a732f-811b-42e0-9b06-ca310cf11882] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:30:30.296923   36573 system_pods.go:61] "storage-provisioner" [0baac458-b032-476e-9a0c-a3883e13328d] Running
	I1206 09:30:30.296929   36573 system_pods.go:74] duration metric: took 14.729012ms to wait for pod list to return data ...
	I1206 09:30:30.296936   36573 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:30:30.304700   36573 default_sa.go:45] found service account: "default"
	I1206 09:30:30.304724   36573 default_sa.go:55] duration metric: took 7.783068ms for default service account to be created ...
	I1206 09:30:30.304735   36573 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:30:30.311138   36573 system_pods.go:86] 7 kube-system pods found
	I1206 09:30:30.311166   36573 system_pods.go:89] "coredns-66bc5c9577-gb46m" [af079579-359c-4a43-841d-0abf03d70d5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:30:30.311173   36573 system_pods.go:89] "etcd-test-preload-109333" [42a2dd97-5ea9-4048-a1f8-817fa5f95e2c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:30:30.311181   36573 system_pods.go:89] "kube-apiserver-test-preload-109333" [d18f7d04-8305-4e86-afe0-660fd13618e8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:30:30.311187   36573 system_pods.go:89] "kube-controller-manager-test-preload-109333" [58243c77-fc49-4b8d-8f6c-da75143e0a7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 09:30:30.311192   36573 system_pods.go:89] "kube-proxy-z29gd" [9a27e049-4998-470c-8e52-379de1e010d2] Running
	I1206 09:30:30.311197   36573 system_pods.go:89] "kube-scheduler-test-preload-109333" [140a732f-811b-42e0-9b06-ca310cf11882] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:30:30.311200   36573 system_pods.go:89] "storage-provisioner" [0baac458-b032-476e-9a0c-a3883e13328d] Running
	I1206 09:30:30.311206   36573 system_pods.go:126] duration metric: took 6.466241ms to wait for k8s-apps to be running ...
	I1206 09:30:30.311213   36573 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:30:30.311268   36573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:30:30.348103   36573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:30:30.355838   36573 system_svc.go:56] duration metric: took 44.614822ms WaitForService to wait for kubelet
	I1206 09:30:30.355875   36573 kubeadm.go:587] duration metric: took 349.827842ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:30:30.355899   36573 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:30:30.363350   36573 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1206 09:30:30.363380   36573 node_conditions.go:123] node cpu capacity is 2
	I1206 09:30:30.363393   36573 node_conditions.go:105] duration metric: took 7.488731ms to run NodePressure ...
	I1206 09:30:30.363408   36573 start.go:242] waiting for startup goroutines ...
	I1206 09:30:30.366734   36573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:30:31.139490   36573 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1206 09:30:31.140824   36573 addons.go:530] duration metric: took 1.13474057s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1206 09:30:31.140860   36573 start.go:247] waiting for cluster config update ...
	I1206 09:30:31.140882   36573 start.go:256] writing updated cluster config ...
	I1206 09:30:31.141203   36573 ssh_runner.go:195] Run: rm -f paused
	I1206 09:30:31.146808   36573 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:30:31.147316   36573 kapi.go:59] client config for test-preload-109333: &rest.Config{Host:"https://192.168.39.120:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/client.crt", KeyFile:"/home/jenkins/minikube-integration/22049-5603/.minikube/profiles/test-preload-109333/client.key", CAFile:"/home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 09:30:31.150603   36573 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gb46m" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 09:30:33.157799   36573 pod_ready.go:104] pod "coredns-66bc5c9577-gb46m" is not "Ready", error: <nil>
	I1206 09:30:34.155999   36573 pod_ready.go:94] pod "coredns-66bc5c9577-gb46m" is "Ready"
	I1206 09:30:34.156032   36573 pod_ready.go:86] duration metric: took 3.005409032s for pod "coredns-66bc5c9577-gb46m" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:30:34.159235   36573 pod_ready.go:83] waiting for pod "etcd-test-preload-109333" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 09:30:36.164781   36573 pod_ready.go:104] pod "etcd-test-preload-109333" is not "Ready", error: <nil>
	W1206 09:30:38.164833   36573 pod_ready.go:104] pod "etcd-test-preload-109333" is not "Ready", error: <nil>
	W1206 09:30:40.165417   36573 pod_ready.go:104] pod "etcd-test-preload-109333" is not "Ready", error: <nil>
	W1206 09:30:42.165702   36573 pod_ready.go:104] pod "etcd-test-preload-109333" is not "Ready", error: <nil>
	I1206 09:30:43.165546   36573 pod_ready.go:94] pod "etcd-test-preload-109333" is "Ready"
	I1206 09:30:43.165568   36573 pod_ready.go:86] duration metric: took 9.006313111s for pod "etcd-test-preload-109333" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:30:43.167924   36573 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-109333" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:30:43.173168   36573 pod_ready.go:94] pod "kube-apiserver-test-preload-109333" is "Ready"
	I1206 09:30:43.173195   36573 pod_ready.go:86] duration metric: took 5.256159ms for pod "kube-apiserver-test-preload-109333" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:30:43.176335   36573 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-109333" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:30:43.181622   36573 pod_ready.go:94] pod "kube-controller-manager-test-preload-109333" is "Ready"
	I1206 09:30:43.181638   36573 pod_ready.go:86] duration metric: took 5.285376ms for pod "kube-controller-manager-test-preload-109333" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:30:43.183501   36573 pod_ready.go:83] waiting for pod "kube-proxy-z29gd" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:30:43.363787   36573 pod_ready.go:94] pod "kube-proxy-z29gd" is "Ready"
	I1206 09:30:43.363810   36573 pod_ready.go:86] duration metric: took 180.291051ms for pod "kube-proxy-z29gd" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:30:43.567507   36573 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-109333" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:30:43.964693   36573 pod_ready.go:94] pod "kube-scheduler-test-preload-109333" is "Ready"
	I1206 09:30:43.964716   36573 pod_ready.go:86] duration metric: took 397.187583ms for pod "kube-scheduler-test-preload-109333" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:30:43.964728   36573 pod_ready.go:40] duration metric: took 12.817896191s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
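
Editor's note: the pod_ready.go lines above wait for each of the listed kube-system label selectors to have a pod whose Ready condition is True (or for the pod to be gone). A hypothetical client-go version of that wait for a single selector, loading the kubeconfig this run writes:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Kubeconfig path from the log above; the selector is one example label
        // from the list in the pod_ready.go:37 line.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22049-5603/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        selector := "component=etcd"
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
                fmt.Printf("pod %q is Ready\n", pods.Items[0].Name)
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for", selector)
    }
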
	I1206 09:30:44.006820   36573 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:30:44.008543   36573 out.go:179] * Done! kubectl is now configured to use "test-preload-109333" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.772898389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765013444772872083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=831a9b2f-d913-4527-8d7e-b5bdab60aa45 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.775092142Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5fdca587-b163-4250-b86c-bcc98a38f6a1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.775171382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5fdca587-b163-4250-b86c-bcc98a38f6a1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.775359275Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8ebf1642e5db8d22563133fd565e2b3cc42b4eb4a77d8e74e52731066715cbf,PodSandboxId:cd0b875d5c82f1ff9970b514ad4ef02aa3a6d6ed1a970ac202e7e8f6024bbbb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765013432478545336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gb46m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af079579-359c-4a43-841d-0abf03d70d5a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624b5447afb0a77cfd780b83b0b634b7ed80661798988a57bfc2a9b31ce4e74e,PodSandboxId:93c6e66623403d293ca62d389b7962e0e90cba2151ab93b372f6ee1ec977f86a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765013428931502298,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z29gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a27e049-4998-470c-8e52-379de1e010d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0be58fd2830673c28b733ee3486f97b43624e9399f1b6b3ade3381a641791a8,PodSandboxId:70a752c80cae3badcd275a5bf82c3543b4928694040bba2a337e8aaf068863bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765013428907398176,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0baac458-b032-476e-9a0c-a3883e13328d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2586c2728e394c978f7a26948701d9d9c7db21f3a84c0eab7889481e142e0628,PodSandboxId:0c80f7d6ab8861c760fe20a1e7082f3bc7efd2a77ab5642a57ea94568b76b659,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765013425351020949,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-109333,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d95d02d30a4d580e1a9bc64cf3e47bf,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76fc8ef690d4f6fc5b947dcd1058197a02e81c96cc74179564ef4b22974380a3,PodSandboxId:87207a041e83f34f0deea4192d85f8ac7805cac72df31c14211d8410e1230bf0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ff
b85,State:CONTAINER_RUNNING,CreatedAt:1765013425344246035,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-109333,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9748a492c781240102303e31b478f23f,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a4d2718a8c9f24401b8980f884cc77a732b9a1520e9b63193d1e4659bd3f224,PodSandboxId:8be4fde97c018c932decfdf6c0a493f14868687187feb76afdbd415cf1cc4827,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765013425308701120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-109333,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8c48f05f0ef0f67e1f02248065e6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba943b1d3147bc4a0072a8ce2854af044302fa28ff35b713d3c14135c8ef2423,PodSandboxId:0fea6d32b9ffcd48a7916a2fe4cb0b1569985dfcadafa17b67a788d0526af5ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765013425299606336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-109333,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c70d160b0925ae48e3694b64a642fd,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5fdca587-b163-4250-b86c-bcc98a38f6a1 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.815119978Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6cd70aa3-e0d5-460a-b2d8-9dcd7246d703 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.815196441Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6cd70aa3-e0d5-460a-b2d8-9dcd7246d703 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.816930426Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ded4e50-dc07-4d7a-91de-7545e531541c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.817389723Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765013444817368150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ded4e50-dc07-4d7a-91de-7545e531541c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.818171200Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5326ef89-3d62-45cb-b103-7d0fa1763fd1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.818580728Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5326ef89-3d62-45cb-b103-7d0fa1763fd1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.818884775Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8ebf1642e5db8d22563133fd565e2b3cc42b4eb4a77d8e74e52731066715cbf,PodSandboxId:cd0b875d5c82f1ff9970b514ad4ef02aa3a6d6ed1a970ac202e7e8f6024bbbb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765013432478545336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gb46m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af079579-359c-4a43-841d-0abf03d70d5a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624b5447afb0a77cfd780b83b0b634b7ed80661798988a57bfc2a9b31ce4e74e,PodSandboxId:93c6e66623403d293ca62d389b7962e0e90cba2151ab93b372f6ee1ec977f86a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765013428931502298,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z29gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a27e049-4998-470c-8e52-379de1e010d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0be58fd2830673c28b733ee3486f97b43624e9399f1b6b3ade3381a641791a8,PodSandboxId:70a752c80cae3badcd275a5bf82c3543b4928694040bba2a337e8aaf068863bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765013428907398176,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0baac458-b032-476e-9a0c-a3883e13328d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2586c2728e394c978f7a26948701d9d9c7db21f3a84c0eab7889481e142e0628,PodSandboxId:0c80f7d6ab8861c760fe20a1e7082f3bc7efd2a77ab5642a57ea94568b76b659,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765013425351020949,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-109333,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d95d02d30a4d580e1a9bc64cf3e47bf,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76fc8ef690d4f6fc5b947dcd1058197a02e81c96cc74179564ef4b22974380a3,PodSandboxId:87207a041e83f34f0deea4192d85f8ac7805cac72df31c14211d8410e1230bf0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ff
b85,State:CONTAINER_RUNNING,CreatedAt:1765013425344246035,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-109333,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9748a492c781240102303e31b478f23f,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a4d2718a8c9f24401b8980f884cc77a732b9a1520e9b63193d1e4659bd3f224,PodSandboxId:8be4fde97c018c932decfdf6c0a493f14868687187feb76afdbd415cf1cc4827,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765013425308701120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-109333,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8c48f05f0ef0f67e1f02248065e6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba943b1d3147bc4a0072a8ce2854af044302fa28ff35b713d3c14135c8ef2423,PodSandboxId:0fea6d32b9ffcd48a7916a2fe4cb0b1569985dfcadafa17b67a788d0526af5ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765013425299606336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-109333,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c70d160b0925ae48e3694b64a642fd,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5326ef89-3d62-45cb-b103-7d0fa1763fd1 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.854784007Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b672ebff-5280-4f76-a377-06d9053ddbb0 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.855257842Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b672ebff-5280-4f76-a377-06d9053ddbb0 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.857364596Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bafee147-c1e3-485f-af09-07f90103bca6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.858025654Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765013444857945773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bafee147-c1e3-485f-af09-07f90103bca6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.859014700Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32620e08-f647-4c21-a587-e350292f7365 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.859084965Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32620e08-f647-4c21-a587-e350292f7365 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.859272068Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8ebf1642e5db8d22563133fd565e2b3cc42b4eb4a77d8e74e52731066715cbf,PodSandboxId:cd0b875d5c82f1ff9970b514ad4ef02aa3a6d6ed1a970ac202e7e8f6024bbbb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765013432478545336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gb46m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af079579-359c-4a43-841d-0abf03d70d5a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624b5447afb0a77cfd780b83b0b634b7ed80661798988a57bfc2a9b31ce4e74e,PodSandboxId:93c6e66623403d293ca62d389b7962e0e90cba2151ab93b372f6ee1ec977f86a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765013428931502298,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z29gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a27e049-4998-470c-8e52-379de1e010d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0be58fd2830673c28b733ee3486f97b43624e9399f1b6b3ade3381a641791a8,PodSandboxId:70a752c80cae3badcd275a5bf82c3543b4928694040bba2a337e8aaf068863bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765013428907398176,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0baac458-b032-476e-9a0c-a3883e13328d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2586c2728e394c978f7a26948701d9d9c7db21f3a84c0eab7889481e142e0628,PodSandboxId:0c80f7d6ab8861c760fe20a1e7082f3bc7efd2a77ab5642a57ea94568b76b659,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765013425351020949,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-109333,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d95d02d30a4d580e1a9bc64cf3e47bf,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76fc8ef690d4f6fc5b947dcd1058197a02e81c96cc74179564ef4b22974380a3,PodSandboxId:87207a041e83f34f0deea4192d85f8ac7805cac72df31c14211d8410e1230bf0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ff
b85,State:CONTAINER_RUNNING,CreatedAt:1765013425344246035,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-109333,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9748a492c781240102303e31b478f23f,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a4d2718a8c9f24401b8980f884cc77a732b9a1520e9b63193d1e4659bd3f224,PodSandboxId:8be4fde97c018c932decfdf6c0a493f14868687187feb76afdbd415cf1cc4827,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765013425308701120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-109333,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8c48f05f0ef0f67e1f02248065e6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba943b1d3147bc4a0072a8ce2854af044302fa28ff35b713d3c14135c8ef2423,PodSandboxId:0fea6d32b9ffcd48a7916a2fe4cb0b1569985dfcadafa17b67a788d0526af5ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765013425299606336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-109333,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c70d160b0925ae48e3694b64a642fd,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32620e08-f647-4c21-a587-e350292f7365 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.889572776Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38f074e1-bfd0-42b2-88d4-b2f71fa49cdb name=/runtime.v1.RuntimeService/Version
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.889773188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38f074e1-bfd0-42b2-88d4-b2f71fa49cdb name=/runtime.v1.RuntimeService/Version
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.891385644Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ccbecde9-990a-4ff8-a1f3-896b4226e177 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.891769189Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765013444891747257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccbecde9-990a-4ff8-a1f3-896b4226e177 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.892503861Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=036d05b8-0f04-4834-9c90-87c71a23b89f name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.892553325Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=036d05b8-0f04-4834-9c90-87c71a23b89f name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:30:44 test-preload-109333 crio[843]: time="2025-12-06 09:30:44.892700054Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8ebf1642e5db8d22563133fd565e2b3cc42b4eb4a77d8e74e52731066715cbf,PodSandboxId:cd0b875d5c82f1ff9970b514ad4ef02aa3a6d6ed1a970ac202e7e8f6024bbbb8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765013432478545336,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gb46m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af079579-359c-4a43-841d-0abf03d70d5a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624b5447afb0a77cfd780b83b0b634b7ed80661798988a57bfc2a9b31ce4e74e,PodSandboxId:93c6e66623403d293ca62d389b7962e0e90cba2151ab93b372f6ee1ec977f86a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765013428931502298,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z29gd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a27e049-4998-470c-8e52-379de1e010d2,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0be58fd2830673c28b733ee3486f97b43624e9399f1b6b3ade3381a641791a8,PodSandboxId:70a752c80cae3badcd275a5bf82c3543b4928694040bba2a337e8aaf068863bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765013428907398176,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0baac458-b032-476e-9a0c-a3883e13328d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2586c2728e394c978f7a26948701d9d9c7db21f3a84c0eab7889481e142e0628,PodSandboxId:0c80f7d6ab8861c760fe20a1e7082f3bc7efd2a77ab5642a57ea94568b76b659,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765013425351020949,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-109333,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d95d02d30a4d580e1a9bc64cf3e47bf,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76fc8ef690d4f6fc5b947dcd1058197a02e81c96cc74179564ef4b22974380a3,PodSandboxId:87207a041e83f34f0deea4192d85f8ac7805cac72df31c14211d8410e1230bf0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ff
b85,State:CONTAINER_RUNNING,CreatedAt:1765013425344246035,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-109333,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9748a492c781240102303e31b478f23f,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a4d2718a8c9f24401b8980f884cc77a732b9a1520e9b63193d1e4659bd3f224,PodSandboxId:8be4fde97c018c932decfdf6c0a493f14868687187feb76afdbd415cf1cc4827,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765013425308701120,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-109333,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f8c48f05f0ef0f67e1f02248065e6c3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba943b1d3147bc4a0072a8ce2854af044302fa28ff35b713d3c14135c8ef2423,PodSandboxId:0fea6d32b9ffcd48a7916a2fe4cb0b1569985dfcadafa17b67a788d0526af5ca,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765013425299606336,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-109333,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4c70d160b0925ae48e3694b64a642fd,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=036d05b8-0f04-4834-9c90-87c71a23b89f name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	d8ebf1642e5db       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   1                   cd0b875d5c82f       coredns-66bc5c9577-gb46m                      kube-system
	624b5447afb0a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   16 seconds ago      Running             kube-proxy                1                   93c6e66623403       kube-proxy-z29gd                              kube-system
	f0be58fd28306       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   70a752c80cae3       storage-provisioner                           kube-system
	2586c2728e394       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   19 seconds ago      Running             kube-scheduler            1                   0c80f7d6ab886       kube-scheduler-test-preload-109333            kube-system
	76fc8ef690d4f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   19 seconds ago      Running             kube-apiserver            1                   87207a041e83f       kube-apiserver-test-preload-109333            kube-system
	0a4d2718a8c9f       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   19 seconds ago      Running             etcd                      1                   8be4fde97c018       etcd-test-preload-109333                      kube-system
	ba943b1d3147b       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   19 seconds ago      Running             kube-controller-manager   1                   0fea6d32b9ffc       kube-controller-manager-test-preload-109333   kube-system
	
	
	==> coredns [d8ebf1642e5db8d22563133fd565e2b3cc42b4eb4a77d8e74e52731066715cbf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49089 - 19690 "HINFO IN 5746669846823803508.8742579965764915549. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028342206s
	
	
	==> describe nodes <==
	Name:               test-preload-109333
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-109333
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=test-preload-109333
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_28_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:28:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-109333
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:30:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:30:30 +0000   Sat, 06 Dec 2025 09:28:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:30:30 +0000   Sat, 06 Dec 2025 09:28:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:30:30 +0000   Sat, 06 Dec 2025 09:28:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:30:30 +0000   Sat, 06 Dec 2025 09:30:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    test-preload-109333
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 156490f4486546f98f6dcdd2a335e3ca
	  System UUID:                156490f4-4865-46f9-8f6d-cdd2a335e3ca
	  Boot ID:                    cee6cc18-b89f-41f3-8d09-2c3e95e6ac9d
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-gb46m                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     104s
	  kube-system                 etcd-test-preload-109333                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         109s
	  kube-system                 kube-apiserver-test-preload-109333             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-test-preload-109333    200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-z29gd                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-test-preload-109333             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 101s                 kube-proxy       
	  Normal   Starting                 15s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node test-preload-109333 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node test-preload-109333 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s (x7 over 116s)  kubelet          Node test-preload-109333 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  116s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 110s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     109s                 kubelet          Node test-preload-109333 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  109s                 kubelet          Node test-preload-109333 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    109s                 kubelet          Node test-preload-109333 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  109s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                108s                 kubelet          Node test-preload-109333 status is now: NodeReady
	  Normal   RegisteredNode           105s                 node-controller  Node test-preload-109333 event: Registered Node test-preload-109333 in Controller
	  Normal   Starting                 22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-109333 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-109333 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-109333 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                  kubelet          Node test-preload-109333 has been rebooted, boot id: cee6cc18-b89f-41f3-8d09-2c3e95e6ac9d
	  Normal   RegisteredNode           14s                  node-controller  Node test-preload-109333 event: Registered Node test-preload-109333 in Controller
	
	
	==> dmesg <==
	[Dec 6 09:30] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001780] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003409] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.983679] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087973] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.101731] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.481319] kauditd_printk_skb: 168 callbacks suppressed
	[  +9.655279] kauditd_printk_skb: 203 callbacks suppressed
	
	
	==> etcd [0a4d2718a8c9f24401b8980f884cc77a732b9a1520e9b63193d1e4659bd3f224] <==
	{"level":"warn","ts":"2025-12-06T09:30:26.884208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:26.929024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:26.945464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:26.953411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:26.991690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.034474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.046482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.062472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.086348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.114875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.134027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.166930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.179030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.187738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.208171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.215326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.236889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.253266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.271042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.283588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.298661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.318507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.329265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.344146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:30:27.447262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58886","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:30:45 up 0 min,  0 users,  load average: 1.34, 0.37, 0.13
	Linux test-preload-109333 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [76fc8ef690d4f6fc5b947dcd1058197a02e81c96cc74179564ef4b22974380a3] <==
	I1206 09:30:28.288564       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:30:28.292342       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1206 09:30:28.292535       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1206 09:30:28.293285       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:30:28.293423       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:30:28.293508       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:30:28.295499       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 09:30:28.295860       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:30:28.298138       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 09:30:28.298352       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:30:28.302756       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:30:28.320999       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1206 09:30:28.321101       1 policy_source.go:240] refreshing policies
	I1206 09:30:28.329003       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:30:28.339854       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:30:28.342165       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:30:28.538104       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:30:29.089428       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:30:29.790462       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:30:29.844770       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:30:29.877414       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:30:29.889859       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:30:31.679850       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:30:31.731099       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:30:32.029392       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ba943b1d3147bc4a0072a8ce2854af044302fa28ff35b713d3c14135c8ef2423] <==
	I1206 09:30:31.640501       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1206 09:30:31.640784       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 09:30:31.640940       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:30:31.645283       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 09:30:31.646555       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1206 09:30:31.649037       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1206 09:30:31.651355       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 09:30:31.655722       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 09:30:31.664047       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:30:31.665231       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:30:31.671725       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 09:30:31.675060       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1206 09:30:31.675175       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1206 09:30:31.675278       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1206 09:30:31.675363       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-109333"
	I1206 09:30:31.675410       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1206 09:30:31.675652       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1206 09:30:31.676549       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 09:30:31.676569       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:30:31.676638       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 09:30:31.676656       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1206 09:30:31.676684       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:30:31.678127       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 09:30:31.678131       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 09:30:31.678172       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	
	
	==> kube-proxy [624b5447afb0a77cfd780b83b0b634b7ed80661798988a57bfc2a9b31ce4e74e] <==
	I1206 09:30:29.261459       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:30:29.362239       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:30:29.362296       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.120"]
	E1206 09:30:29.362380       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:30:29.422289       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:30:29.422420       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:30:29.422468       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:30:29.433743       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:30:29.434155       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:30:29.434209       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:30:29.438489       1 config.go:200] "Starting service config controller"
	I1206 09:30:29.443217       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:30:29.442179       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:30:29.443299       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:30:29.442202       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:30:29.443328       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:30:29.447206       1 config.go:309] "Starting node config controller"
	I1206 09:30:29.447292       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:30:29.447317       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:30:29.544459       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:30:29.544627       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:30:29.544363       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2586c2728e394c978f7a26948701d9d9c7db21f3a84c0eab7889481e142e0628] <==
	I1206 09:30:27.739489       1 serving.go:386] Generated self-signed cert in-memory
	I1206 09:30:28.433449       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 09:30:28.433542       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:30:28.439554       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:30:28.439655       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:30:28.439896       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:30:28.439637       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1206 09:30:28.439931       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1206 09:30:28.439664       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:30:28.440006       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:30:28.439676       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:30:28.540241       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1206 09:30:28.540315       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:30:28.540337       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 06 09:30:28 test-preload-109333 kubelet[1198]: E1206 09:30:28.388569    1198 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-109333\" already exists" pod="kube-system/kube-scheduler-test-preload-109333"
	Dec 06 09:30:28 test-preload-109333 kubelet[1198]: I1206 09:30:28.460538    1198 apiserver.go:52] "Watching apiserver"
	Dec 06 09:30:28 test-preload-109333 kubelet[1198]: E1206 09:30:28.465351    1198 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-gb46m" podUID="af079579-359c-4a43-841d-0abf03d70d5a"
	Dec 06 09:30:28 test-preload-109333 kubelet[1198]: I1206 09:30:28.492215    1198 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 06 09:30:28 test-preload-109333 kubelet[1198]: I1206 09:30:28.527544    1198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a27e049-4998-470c-8e52-379de1e010d2-xtables-lock\") pod \"kube-proxy-z29gd\" (UID: \"9a27e049-4998-470c-8e52-379de1e010d2\") " pod="kube-system/kube-proxy-z29gd"
	Dec 06 09:30:28 test-preload-109333 kubelet[1198]: I1206 09:30:28.527614    1198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0baac458-b032-476e-9a0c-a3883e13328d-tmp\") pod \"storage-provisioner\" (UID: \"0baac458-b032-476e-9a0c-a3883e13328d\") " pod="kube-system/storage-provisioner"
	Dec 06 09:30:28 test-preload-109333 kubelet[1198]: I1206 09:30:28.527701    1198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a27e049-4998-470c-8e52-379de1e010d2-lib-modules\") pod \"kube-proxy-z29gd\" (UID: \"9a27e049-4998-470c-8e52-379de1e010d2\") " pod="kube-system/kube-proxy-z29gd"
	Dec 06 09:30:28 test-preload-109333 kubelet[1198]: E1206 09:30:28.528243    1198 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 06 09:30:28 test-preload-109333 kubelet[1198]: E1206 09:30:28.528306    1198 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af079579-359c-4a43-841d-0abf03d70d5a-config-volume podName:af079579-359c-4a43-841d-0abf03d70d5a nodeName:}" failed. No retries permitted until 2025-12-06 09:30:29.028289407 +0000 UTC m=+5.682293738 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/af079579-359c-4a43-841d-0abf03d70d5a-config-volume") pod "coredns-66bc5c9577-gb46m" (UID: "af079579-359c-4a43-841d-0abf03d70d5a") : object "kube-system"/"coredns" not registered
	Dec 06 09:30:28 test-preload-109333 kubelet[1198]: I1206 09:30:28.667898    1198 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-109333"
	Dec 06 09:30:28 test-preload-109333 kubelet[1198]: I1206 09:30:28.669098    1198 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-109333"
	Dec 06 09:30:28 test-preload-109333 kubelet[1198]: I1206 09:30:28.669400    1198 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-109333"
	Dec 06 09:30:28 test-preload-109333 kubelet[1198]: E1206 09:30:28.679833    1198 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-109333\" already exists" pod="kube-system/kube-scheduler-test-preload-109333"
	Dec 06 09:30:28 test-preload-109333 kubelet[1198]: E1206 09:30:28.680816    1198 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-109333\" already exists" pod="kube-system/etcd-test-preload-109333"
	Dec 06 09:30:28 test-preload-109333 kubelet[1198]: E1206 09:30:28.681798    1198 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-109333\" already exists" pod="kube-system/kube-apiserver-test-preload-109333"
	Dec 06 09:30:29 test-preload-109333 kubelet[1198]: E1206 09:30:29.031463    1198 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 06 09:30:29 test-preload-109333 kubelet[1198]: E1206 09:30:29.031550    1198 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af079579-359c-4a43-841d-0abf03d70d5a-config-volume podName:af079579-359c-4a43-841d-0abf03d70d5a nodeName:}" failed. No retries permitted until 2025-12-06 09:30:30.031536104 +0000 UTC m=+6.685540423 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/af079579-359c-4a43-841d-0abf03d70d5a-config-volume") pod "coredns-66bc5c9577-gb46m" (UID: "af079579-359c-4a43-841d-0abf03d70d5a") : object "kube-system"/"coredns" not registered
	Dec 06 09:30:29 test-preload-109333 kubelet[1198]: E1206 09:30:29.538406    1198 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-gb46m" podUID="af079579-359c-4a43-841d-0abf03d70d5a"
	Dec 06 09:30:30 test-preload-109333 kubelet[1198]: E1206 09:30:30.043939    1198 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 06 09:30:30 test-preload-109333 kubelet[1198]: E1206 09:30:30.044237    1198 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/af079579-359c-4a43-841d-0abf03d70d5a-config-volume podName:af079579-359c-4a43-841d-0abf03d70d5a nodeName:}" failed. No retries permitted until 2025-12-06 09:30:32.04421977 +0000 UTC m=+8.698224093 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/af079579-359c-4a43-841d-0abf03d70d5a-config-volume") pod "coredns-66bc5c9577-gb46m" (UID: "af079579-359c-4a43-841d-0abf03d70d5a") : object "kube-system"/"coredns" not registered
	Dec 06 09:30:30 test-preload-109333 kubelet[1198]: I1206 09:30:30.066395    1198 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 06 09:30:33 test-preload-109333 kubelet[1198]: E1206 09:30:33.543305    1198 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765013433541291253 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 06 09:30:33 test-preload-109333 kubelet[1198]: E1206 09:30:33.543354    1198 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765013433541291253 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 06 09:30:43 test-preload-109333 kubelet[1198]: E1206 09:30:43.545474    1198 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765013443545117317 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 06 09:30:43 test-preload-109333 kubelet[1198]: E1206 09:30:43.545500    1198 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765013443545117317 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	
	
	==> storage-provisioner [f0be58fd2830673c28b733ee3486f97b43624e9399f1b6b3ade3381a641791a8] <==
	I1206 09:30:29.018189       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-109333 -n test-preload-109333
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-109333 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-109333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-109333
--- FAIL: TestPreload (159.79s)
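
The kubelet messages in the quoted logs above describe two transient conditions right after the preload restart: the coredns ConfigMap is reported as "not registered" (typically a sign that the restarted kubelet has not yet re-synced kube-system objects into its cache), and the pod network is NotReady because no CNI configuration has been written to /etc/cni/net.d/ yet. Had the profile still been running, both conditions could have been checked by hand with the same command forms used elsewhere in this report (an illustrative sketch only; the profile name and paths are taken from the logs above, and the k8s-app=kube-dns selector is the usual upstream CoreDNS label rather than something shown in this report):

	out/minikube-linux-amd64 -p test-preload-109333 ssh "ls /etc/cni/net.d/"
	kubectl --context test-preload-109333 -n kube-system get configmap coredns
	kubectl --context test-preload-109333 -n kube-system get pods -l k8s-app=kube-dns -o wide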

TestPause/serial/SecondStartNoReconfiguration (67.43s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-272844 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-272844 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m4.006127745s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-272844] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-272844" primary control-plane node in "pause-272844" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-272844" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I1206 09:34:18.852594   39138 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:34:18.852898   39138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:34:18.852912   39138 out.go:374] Setting ErrFile to fd 2...
	I1206 09:34:18.852918   39138 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:34:18.853167   39138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 09:34:18.853770   39138 out.go:368] Setting JSON to false
	I1206 09:34:18.855051   39138 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4601,"bootTime":1765009058,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:34:18.855120   39138 start.go:143] virtualization: kvm guest
	I1206 09:34:18.857151   39138 out.go:179] * [pause-272844] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:34:18.858694   39138 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:34:18.858734   39138 notify.go:221] Checking for updates...
	I1206 09:34:18.861040   39138 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:34:18.862136   39138 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	I1206 09:34:18.863369   39138 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	I1206 09:34:18.864595   39138 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:34:18.869160   39138 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:34:18.871077   39138 config.go:182] Loaded profile config "pause-272844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:34:18.871769   39138 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:34:18.912083   39138 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:34:18.913212   39138 start.go:309] selected driver: kvm2
	I1206 09:34:18.913231   39138 start.go:927] validating driver "kvm2" against &{Name:pause-272844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.2 ClusterName:pause-272844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.157 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:34:18.913421   39138 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:34:18.914744   39138 cni.go:84] Creating CNI manager for ""
	I1206 09:34:18.914819   39138 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:34:18.914878   39138 start.go:353] cluster config:
	{Name:pause-272844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-272844 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.157 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:34:18.915010   39138 iso.go:125] acquiring lock: {Name:mk30cf35cfaf5c28a2b5f78c7b431de5eb8c8e82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:34:18.917327   39138 out.go:179] * Starting "pause-272844" primary control-plane node in "pause-272844" cluster
	I1206 09:34:18.918446   39138 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:34:18.918509   39138 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:34:18.918541   39138 cache.go:65] Caching tarball of preloaded images
	I1206 09:34:18.918647   39138 preload.go:238] Found /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:34:18.918662   39138 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:34:18.918797   39138 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/pause-272844/config.json ...
	I1206 09:34:18.919049   39138 start.go:360] acquireMachinesLock for pause-272844: {Name:mk3342af5720fb96b5115fa945410cab4f7bd1fb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 09:34:31.053624   39138 start.go:364] duration metric: took 12.134515681s to acquireMachinesLock for "pause-272844"
	I1206 09:34:31.053674   39138 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:34:31.053729   39138 fix.go:54] fixHost starting: 
	I1206 09:34:31.056108   39138 fix.go:112] recreateIfNeeded on pause-272844: state=Running err=<nil>
	W1206 09:34:31.056149   39138 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:34:31.058887   39138 out.go:252] * Updating the running kvm2 "pause-272844" VM ...
	I1206 09:34:31.058916   39138 machine.go:94] provisionDockerMachine start ...
	I1206 09:34:31.062717   39138 main.go:143] libmachine: domain pause-272844 has defined MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:31.063157   39138 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:da:ee:f5", ip: ""} in network mk-pause-272844: {Iface:virbr2 ExpiryTime:2025-12-06 10:33:13 +0000 UTC Type:0 Mac:52:54:00:da:ee:f5 Iaid: IPaddr:192.168.50.157 Prefix:24 Hostname:pause-272844 Clientid:01:52:54:00:da:ee:f5}
	I1206 09:34:31.063191   39138 main.go:143] libmachine: domain pause-272844 has defined IP address 192.168.50.157 and MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:31.063357   39138 main.go:143] libmachine: Using SSH client type: native
	I1206 09:34:31.063617   39138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.157 22 <nil> <nil>}
	I1206 09:34:31.063633   39138 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:34:31.176431   39138 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-272844
	
	I1206 09:34:31.176462   39138 buildroot.go:166] provisioning hostname "pause-272844"
	I1206 09:34:31.179911   39138 main.go:143] libmachine: domain pause-272844 has defined MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:31.180505   39138 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:da:ee:f5", ip: ""} in network mk-pause-272844: {Iface:virbr2 ExpiryTime:2025-12-06 10:33:13 +0000 UTC Type:0 Mac:52:54:00:da:ee:f5 Iaid: IPaddr:192.168.50.157 Prefix:24 Hostname:pause-272844 Clientid:01:52:54:00:da:ee:f5}
	I1206 09:34:31.180549   39138 main.go:143] libmachine: domain pause-272844 has defined IP address 192.168.50.157 and MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:31.180770   39138 main.go:143] libmachine: Using SSH client type: native
	I1206 09:34:31.181091   39138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.157 22 <nil> <nil>}
	I1206 09:34:31.181111   39138 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-272844 && echo "pause-272844" | sudo tee /etc/hostname
	I1206 09:34:31.308001   39138 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-272844
	
	I1206 09:34:31.311115   39138 main.go:143] libmachine: domain pause-272844 has defined MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:31.311489   39138 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:da:ee:f5", ip: ""} in network mk-pause-272844: {Iface:virbr2 ExpiryTime:2025-12-06 10:33:13 +0000 UTC Type:0 Mac:52:54:00:da:ee:f5 Iaid: IPaddr:192.168.50.157 Prefix:24 Hostname:pause-272844 Clientid:01:52:54:00:da:ee:f5}
	I1206 09:34:31.311522   39138 main.go:143] libmachine: domain pause-272844 has defined IP address 192.168.50.157 and MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:31.311715   39138 main.go:143] libmachine: Using SSH client type: native
	I1206 09:34:31.311950   39138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.157 22 <nil> <nil>}
	I1206 09:34:31.311966   39138 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-272844' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-272844/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-272844' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:34:31.430421   39138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:34:31.430449   39138 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22049-5603/.minikube CaCertPath:/home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22049-5603/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22049-5603/.minikube}
	I1206 09:34:31.430491   39138 buildroot.go:174] setting up certificates
	I1206 09:34:31.430500   39138 provision.go:84] configureAuth start
	I1206 09:34:31.433774   39138 main.go:143] libmachine: domain pause-272844 has defined MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:31.434287   39138 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:da:ee:f5", ip: ""} in network mk-pause-272844: {Iface:virbr2 ExpiryTime:2025-12-06 10:33:13 +0000 UTC Type:0 Mac:52:54:00:da:ee:f5 Iaid: IPaddr:192.168.50.157 Prefix:24 Hostname:pause-272844 Clientid:01:52:54:00:da:ee:f5}
	I1206 09:34:31.434325   39138 main.go:143] libmachine: domain pause-272844 has defined IP address 192.168.50.157 and MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:31.437592   39138 main.go:143] libmachine: domain pause-272844 has defined MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:31.438081   39138 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:da:ee:f5", ip: ""} in network mk-pause-272844: {Iface:virbr2 ExpiryTime:2025-12-06 10:33:13 +0000 UTC Type:0 Mac:52:54:00:da:ee:f5 Iaid: IPaddr:192.168.50.157 Prefix:24 Hostname:pause-272844 Clientid:01:52:54:00:da:ee:f5}
	I1206 09:34:31.438117   39138 main.go:143] libmachine: domain pause-272844 has defined IP address 192.168.50.157 and MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:31.438290   39138 provision.go:143] copyHostCerts
	I1206 09:34:31.438353   39138 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5603/.minikube/ca.pem, removing ...
	I1206 09:34:31.438379   39138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5603/.minikube/ca.pem
	I1206 09:34:31.438453   39138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22049-5603/.minikube/ca.pem (1082 bytes)
	I1206 09:34:31.438616   39138 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5603/.minikube/cert.pem, removing ...
	I1206 09:34:31.438631   39138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5603/.minikube/cert.pem
	I1206 09:34:31.438683   39138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22049-5603/.minikube/cert.pem (1123 bytes)
	I1206 09:34:31.438778   39138 exec_runner.go:144] found /home/jenkins/minikube-integration/22049-5603/.minikube/key.pem, removing ...
	I1206 09:34:31.438792   39138 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22049-5603/.minikube/key.pem
	I1206 09:34:31.438829   39138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22049-5603/.minikube/key.pem (1675 bytes)
	I1206 09:34:31.438924   39138 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22049-5603/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca-key.pem org=jenkins.pause-272844 san=[127.0.0.1 192.168.50.157 localhost minikube pause-272844]
	I1206 09:34:31.569134   39138 provision.go:177] copyRemoteCerts
	I1206 09:34:31.569192   39138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:34:31.572584   39138 main.go:143] libmachine: domain pause-272844 has defined MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:31.573078   39138 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:da:ee:f5", ip: ""} in network mk-pause-272844: {Iface:virbr2 ExpiryTime:2025-12-06 10:33:13 +0000 UTC Type:0 Mac:52:54:00:da:ee:f5 Iaid: IPaddr:192.168.50.157 Prefix:24 Hostname:pause-272844 Clientid:01:52:54:00:da:ee:f5}
	I1206 09:34:31.573113   39138 main.go:143] libmachine: domain pause-272844 has defined IP address 192.168.50.157 and MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:31.573315   39138 sshutil.go:53] new ssh client: &{IP:192.168.50.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/pause-272844/id_rsa Username:docker}
	I1206 09:34:31.665730   39138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 09:34:31.703204   39138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:34:31.740512   39138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:34:31.780063   39138 provision.go:87] duration metric: took 349.548568ms to configureAuth
	I1206 09:34:31.780097   39138 buildroot.go:189] setting minikube options for container-runtime
	I1206 09:34:31.780433   39138 config.go:182] Loaded profile config "pause-272844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:34:31.783811   39138 main.go:143] libmachine: domain pause-272844 has defined MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:31.784276   39138 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:da:ee:f5", ip: ""} in network mk-pause-272844: {Iface:virbr2 ExpiryTime:2025-12-06 10:33:13 +0000 UTC Type:0 Mac:52:54:00:da:ee:f5 Iaid: IPaddr:192.168.50.157 Prefix:24 Hostname:pause-272844 Clientid:01:52:54:00:da:ee:f5}
	I1206 09:34:31.784301   39138 main.go:143] libmachine: domain pause-272844 has defined IP address 192.168.50.157 and MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:31.784448   39138 main.go:143] libmachine: Using SSH client type: native
	I1206 09:34:31.784718   39138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.157 22 <nil> <nil>}
	I1206 09:34:31.784735   39138 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:34:37.377060   39138 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:34:37.377094   39138 machine.go:97] duration metric: took 6.318163361s to provisionDockerMachine
	I1206 09:34:37.377110   39138 start.go:293] postStartSetup for "pause-272844" (driver="kvm2")
	I1206 09:34:37.377123   39138 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:34:37.377198   39138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:34:37.380921   39138 main.go:143] libmachine: domain pause-272844 has defined MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:37.381387   39138 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:da:ee:f5", ip: ""} in network mk-pause-272844: {Iface:virbr2 ExpiryTime:2025-12-06 10:33:13 +0000 UTC Type:0 Mac:52:54:00:da:ee:f5 Iaid: IPaddr:192.168.50.157 Prefix:24 Hostname:pause-272844 Clientid:01:52:54:00:da:ee:f5}
	I1206 09:34:37.381417   39138 main.go:143] libmachine: domain pause-272844 has defined IP address 192.168.50.157 and MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:37.381581   39138 sshutil.go:53] new ssh client: &{IP:192.168.50.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/pause-272844/id_rsa Username:docker}
	I1206 09:34:37.465463   39138 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:34:37.473545   39138 info.go:137] Remote host: Buildroot 2025.02
	I1206 09:34:37.473575   39138 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5603/.minikube/addons for local assets ...
	I1206 09:34:37.473640   39138 filesync.go:126] Scanning /home/jenkins/minikube-integration/22049-5603/.minikube/files for local assets ...
	I1206 09:34:37.473759   39138 filesync.go:149] local asset: /home/jenkins/minikube-integration/22049-5603/.minikube/files/etc/ssl/certs/95522.pem -> 95522.pem in /etc/ssl/certs
	I1206 09:34:37.473895   39138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 09:34:37.490215   39138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/files/etc/ssl/certs/95522.pem --> /etc/ssl/certs/95522.pem (1708 bytes)
	I1206 09:34:37.524315   39138 start.go:296] duration metric: took 147.19019ms for postStartSetup
	I1206 09:34:37.524363   39138 fix.go:56] duration metric: took 6.470638584s for fixHost
	I1206 09:34:37.527200   39138 main.go:143] libmachine: domain pause-272844 has defined MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:37.527582   39138 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:da:ee:f5", ip: ""} in network mk-pause-272844: {Iface:virbr2 ExpiryTime:2025-12-06 10:33:13 +0000 UTC Type:0 Mac:52:54:00:da:ee:f5 Iaid: IPaddr:192.168.50.157 Prefix:24 Hostname:pause-272844 Clientid:01:52:54:00:da:ee:f5}
	I1206 09:34:37.527606   39138 main.go:143] libmachine: domain pause-272844 has defined IP address 192.168.50.157 and MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:37.527773   39138 main.go:143] libmachine: Using SSH client type: native
	I1206 09:34:37.528015   39138 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.157 22 <nil> <nil>}
	I1206 09:34:37.528027   39138 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1206 09:34:37.633747   39138 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765013677.630704764
	
	I1206 09:34:37.633771   39138 fix.go:216] guest clock: 1765013677.630704764
	I1206 09:34:37.633780   39138 fix.go:229] Guest: 2025-12-06 09:34:37.630704764 +0000 UTC Remote: 2025-12-06 09:34:37.524368356 +0000 UTC m=+18.734367614 (delta=106.336408ms)
	I1206 09:34:37.633800   39138 fix.go:200] guest clock delta is within tolerance: 106.336408ms
	I1206 09:34:37.633807   39138 start.go:83] releasing machines lock for "pause-272844", held for 6.580155338s
	I1206 09:34:37.637220   39138 main.go:143] libmachine: domain pause-272844 has defined MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:37.638034   39138 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:da:ee:f5", ip: ""} in network mk-pause-272844: {Iface:virbr2 ExpiryTime:2025-12-06 10:33:13 +0000 UTC Type:0 Mac:52:54:00:da:ee:f5 Iaid: IPaddr:192.168.50.157 Prefix:24 Hostname:pause-272844 Clientid:01:52:54:00:da:ee:f5}
	I1206 09:34:37.638070   39138 main.go:143] libmachine: domain pause-272844 has defined IP address 192.168.50.157 and MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:37.638703   39138 ssh_runner.go:195] Run: cat /version.json
	I1206 09:34:37.638767   39138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:34:37.642038   39138 main.go:143] libmachine: domain pause-272844 has defined MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:37.642229   39138 main.go:143] libmachine: domain pause-272844 has defined MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:37.642554   39138 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:da:ee:f5", ip: ""} in network mk-pause-272844: {Iface:virbr2 ExpiryTime:2025-12-06 10:33:13 +0000 UTC Type:0 Mac:52:54:00:da:ee:f5 Iaid: IPaddr:192.168.50.157 Prefix:24 Hostname:pause-272844 Clientid:01:52:54:00:da:ee:f5}
	I1206 09:34:37.642595   39138 main.go:143] libmachine: domain pause-272844 has defined IP address 192.168.50.157 and MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:37.642776   39138 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:da:ee:f5", ip: ""} in network mk-pause-272844: {Iface:virbr2 ExpiryTime:2025-12-06 10:33:13 +0000 UTC Type:0 Mac:52:54:00:da:ee:f5 Iaid: IPaddr:192.168.50.157 Prefix:24 Hostname:pause-272844 Clientid:01:52:54:00:da:ee:f5}
	I1206 09:34:37.642807   39138 main.go:143] libmachine: domain pause-272844 has defined IP address 192.168.50.157 and MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:37.642802   39138 sshutil.go:53] new ssh client: &{IP:192.168.50.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/pause-272844/id_rsa Username:docker}
	I1206 09:34:37.643066   39138 sshutil.go:53] new ssh client: &{IP:192.168.50.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/pause-272844/id_rsa Username:docker}
	I1206 09:34:37.748890   39138 ssh_runner.go:195] Run: systemctl --version
	I1206 09:34:37.755834   39138 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:34:37.916127   39138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:34:37.926415   39138 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:34:37.926496   39138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:34:37.940779   39138 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 09:34:37.940802   39138 start.go:496] detecting cgroup driver to use...
	I1206 09:34:37.940878   39138 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:34:37.970705   39138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:34:37.993289   39138 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:34:37.993358   39138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:34:38.016366   39138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:34:38.034595   39138 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:34:38.260098   39138 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:34:38.472479   39138 docker.go:234] disabling docker service ...
	I1206 09:34:38.472539   39138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:34:38.511192   39138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:34:38.534076   39138 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:34:38.734034   39138 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:34:38.938551   39138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:34:38.961159   39138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:34:38.994810   39138 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:34:38.994862   39138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:34:39.010274   39138 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 09:34:39.010346   39138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:34:39.026018   39138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:34:39.042666   39138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:34:39.057348   39138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:34:39.072980   39138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:34:39.091693   39138 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:34:39.107977   39138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:34:39.124671   39138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:34:39.138195   39138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:34:39.154790   39138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:34:39.373328   39138 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:34:46.516607   39138 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.143245687s)
	I1206 09:34:46.516634   39138 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:34:46.516687   39138 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:34:46.524305   39138 start.go:564] Will wait 60s for crictl version
	I1206 09:34:46.524359   39138 ssh_runner.go:195] Run: which crictl
	I1206 09:34:46.530105   39138 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 09:34:46.572897   39138 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1206 09:34:46.572982   39138 ssh_runner.go:195] Run: crio --version
	I1206 09:34:46.613063   39138 ssh_runner.go:195] Run: crio --version
	I1206 09:34:46.653043   39138 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1206 09:34:46.657874   39138 main.go:143] libmachine: domain pause-272844 has defined MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:46.658452   39138 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:da:ee:f5", ip: ""} in network mk-pause-272844: {Iface:virbr2 ExpiryTime:2025-12-06 10:33:13 +0000 UTC Type:0 Mac:52:54:00:da:ee:f5 Iaid: IPaddr:192.168.50.157 Prefix:24 Hostname:pause-272844 Clientid:01:52:54:00:da:ee:f5}
	I1206 09:34:46.658503   39138 main.go:143] libmachine: domain pause-272844 has defined IP address 192.168.50.157 and MAC address 52:54:00:da:ee:f5 in network mk-pause-272844
	I1206 09:34:46.658691   39138 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1206 09:34:46.664774   39138 kubeadm.go:884] updating cluster {Name:pause-272844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:pause-272844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.157 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:34:46.664952   39138 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:34:46.665009   39138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:34:46.721993   39138 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:34:46.722021   39138 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:34:46.722096   39138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:34:46.764914   39138 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:34:46.764937   39138 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:34:46.764944   39138 kubeadm.go:935] updating node { 192.168.50.157 8443 v1.34.2 crio true true} ...
	I1206 09:34:46.765062   39138 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-272844 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-272844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:34:46.765142   39138 ssh_runner.go:195] Run: crio config
	I1206 09:34:46.819222   39138 cni.go:84] Creating CNI manager for ""
	I1206 09:34:46.819249   39138 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:34:46.819276   39138 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:34:46.819307   39138 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.157 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-272844 NodeName:pause-272844 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:34:46.819488   39138 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-272844"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.157"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.157"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:34:46.819558   39138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:34:46.833078   39138 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:34:46.833144   39138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:34:46.845435   39138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1206 09:34:46.867596   39138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:34:46.888691   39138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1206 09:34:46.908999   39138 ssh_runner.go:195] Run: grep 192.168.50.157	control-plane.minikube.internal$ /etc/hosts
	I1206 09:34:46.913210   39138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:34:47.110338   39138 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:34:47.133203   39138 certs.go:69] Setting up /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/pause-272844 for IP: 192.168.50.157
	I1206 09:34:47.133232   39138 certs.go:195] generating shared ca certs ...
	I1206 09:34:47.133260   39138 certs.go:227] acquiring lock for ca certs: {Name:mk000359972764fead2b3aaf8b843862aa35270c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:34:47.133427   39138 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22049-5603/.minikube/ca.key
	I1206 09:34:47.133525   39138 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.key
	I1206 09:34:47.133546   39138 certs.go:257] generating profile certs ...
	I1206 09:34:47.133678   39138 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/pause-272844/client.key
	I1206 09:34:47.133760   39138 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/pause-272844/apiserver.key.a469c9a8
	I1206 09:34:47.133815   39138 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/pause-272844/proxy-client.key
	I1206 09:34:47.133983   39138 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/9552.pem (1338 bytes)
	W1206 09:34:47.134028   39138 certs.go:480] ignoring /home/jenkins/minikube-integration/22049-5603/.minikube/certs/9552_empty.pem, impossibly tiny 0 bytes
	I1206 09:34:47.134038   39138 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:34:47.134081   39138 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:34:47.134115   39138 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:34:47.134157   39138 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/certs/key.pem (1675 bytes)
	I1206 09:34:47.134215   39138 certs.go:484] found cert: /home/jenkins/minikube-integration/22049-5603/.minikube/files/etc/ssl/certs/95522.pem (1708 bytes)
	I1206 09:34:47.135094   39138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:34:47.166089   39138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:34:47.199775   39138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:34:47.230000   39138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:34:47.263659   39138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/pause-272844/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:34:47.302569   39138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/pause-272844/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:34:47.335176   39138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/pause-272844/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:34:47.365613   39138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/pause-272844/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:34:47.398373   39138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:34:47.431384   39138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/certs/9552.pem --> /usr/share/ca-certificates/9552.pem (1338 bytes)
	I1206 09:34:47.467636   39138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22049-5603/.minikube/files/etc/ssl/certs/95522.pem --> /usr/share/ca-certificates/95522.pem (1708 bytes)
	I1206 09:34:47.514972   39138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:34:47.558162   39138 ssh_runner.go:195] Run: openssl version
	I1206 09:34:47.568865   39138 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:34:47.596240   39138 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:34:47.631122   39138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:34:47.641974   39138 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:34:47.642035   39138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:34:47.658298   39138 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:34:47.740354   39138 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/9552.pem
	I1206 09:34:47.788692   39138 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/9552.pem /etc/ssl/certs/9552.pem
	I1206 09:34:47.826286   39138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9552.pem
	I1206 09:34:47.842692   39138 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 08:46 /usr/share/ca-certificates/9552.pem
	I1206 09:34:47.842760   39138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9552.pem
	I1206 09:34:47.860585   39138 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:34:47.904219   39138 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/95522.pem
	I1206 09:34:47.941547   39138 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/95522.pem /etc/ssl/certs/95522.pem
	I1206 09:34:47.975129   39138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/95522.pem
	I1206 09:34:47.994233   39138 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 08:46 /usr/share/ca-certificates/95522.pem
	I1206 09:34:47.994297   39138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/95522.pem
	I1206 09:34:48.040213   39138 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:34:48.079733   39138 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:34:48.097067   39138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:34:48.119251   39138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:34:48.134819   39138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:34:48.153151   39138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:34:48.171637   39138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:34:48.193483   39138 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 09:34:48.214658   39138 kubeadm.go:401] StartCluster: {Name:pause-272844 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 Cl
usterName:pause-272844 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.157 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:34:48.214793   39138 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:34:48.214851   39138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:34:48.315919   39138 cri.go:89] found id: "5cb81cd535b67d209a36ac0ef24eec735d25bcb5e170e0383cbe6939107cc90a"
	I1206 09:34:48.315942   39138 cri.go:89] found id: "fcae0ad6f528493f7d768a91eb0b5640ab811469bd4881b8d266de4c65a1530e"
	I1206 09:34:48.315948   39138 cri.go:89] found id: "c0292705c13431f224cd74cc3ab61bab740744bed4818edb49dc87b30a143a39"
	I1206 09:34:48.315953   39138 cri.go:89] found id: "906109c794a06106da1a38cc94530358d3a6b3fa0ebdb8adbb8293f6959a69ab"
	I1206 09:34:48.315959   39138 cri.go:89] found id: "551b029a392fd894a8a8ea7fb350190f4e7a1ae50bbea21f74e1ee9e488fcdf8"
	I1206 09:34:48.315964   39138 cri.go:89] found id: "1762c3811ed0921bcedf9f48e0729e79858afdbc1a0c39a6db287901e95b83f5"
	I1206 09:34:48.315970   39138 cri.go:89] found id: ""
	I1206 09:34:48.316021   39138 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-272844 -n pause-272844
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-272844 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-272844 logs -n 25: (1.196786411s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-920584 sudo systemctl cat docker --no-pager                                                                                                       │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo cat /etc/docker/daemon.json                                                                                                           │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo docker system info                                                                                                                    │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo systemctl status cri-docker --all --full --no-pager                                                                                   │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo systemctl cat cri-docker --no-pager                                                                                                   │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                              │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                        │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo cri-dockerd --version                                                                                                                 │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo systemctl status containerd --all --full --no-pager                                                                                   │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo systemctl cat containerd --no-pager                                                                                                   │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo cat /lib/systemd/system/containerd.service                                                                                            │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo cat /etc/containerd/config.toml                                                                                                       │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo containerd config dump                                                                                                                │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo systemctl status crio --all --full --no-pager                                                                                         │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo systemctl cat crio --no-pager                                                                                                         │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                               │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo crio config                                                                                                                           │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ delete  │ -p cilium-920584                                                                                                                                            │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │ 06 Dec 25 09:34 UTC │
	│ start   │ -p guest-688206 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                     │ guest-688206              │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │ 06 Dec 25 09:35 UTC │
	│ ssh     │ -p NoKubernetes-030154 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-030154       │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-044478 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ running-upgrade-044478    │ jenkins │ v1.37.0 │ 06 Dec 25 09:35 UTC │                     │
	│ delete  │ -p running-upgrade-044478                                                                                                                                   │ running-upgrade-044478    │ jenkins │ v1.37.0 │ 06 Dec 25 09:35 UTC │ 06 Dec 25 09:35 UTC │
	│ start   │ -p kubernetes-upgrade-460997 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                      │ kubernetes-upgrade-460997 │ jenkins │ v1.37.0 │ 06 Dec 25 09:35 UTC │                     │
	│ stop    │ -p NoKubernetes-030154                                                                                                                                      │ NoKubernetes-030154       │ jenkins │ v1.37.0 │ 06 Dec 25 09:35 UTC │ 06 Dec 25 09:35 UTC │
	│ start   │ -p NoKubernetes-030154 --driver=kvm2  --container-runtime=crio                                                                                              │ NoKubernetes-030154       │ jenkins │ v1.37.0 │ 06 Dec 25 09:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:35:22
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:35:22.774553   42128 out.go:345] Setting OutFile to fd 1 ...
	I1206 09:35:22.774910   42128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1206 09:35:22.774914   42128 out.go:358] Setting ErrFile to fd 2...
	I1206 09:35:22.774918   42128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1206 09:35:22.775110   42128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 09:35:22.775670   42128 out.go:352] Setting JSON to false
	I1206 09:35:22.776580   42128 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4665,"bootTime":1765009058,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:35:22.776644   42128 start.go:139] virtualization: kvm guest
	I1206 09:35:22.778312   42128 out.go:177] * [stopped-upgrade-295047] minikube v1.35.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:35:22.779666   42128 out.go:177]   - MINIKUBE_LOCATION=22049
	I1206 09:35:22.779669   42128 notify.go:220] Checking for updates...
	I1206 09:35:22.780813   42128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:35:22.782827   42128 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	I1206 09:35:22.783770   42128 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:35:22.785022   42128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:35:22.789667   42128 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig1995796069
	I1206 09:35:22.791382   42128 config.go:182] Loaded profile config "NoKubernetes-030154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1206 09:35:22.791544   42128 config.go:182] Loaded profile config "guest-688206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1206 09:35:22.791664   42128 config.go:182] Loaded profile config "kubernetes-upgrade-460997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1206 09:35:22.791842   42128 config.go:182] Loaded profile config "pause-272844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:35:22.792038   42128 driver.go:394] Setting default libvirt URI to qemu:///system
	I1206 09:35:22.833954   42128 out.go:177] * Using the kvm2 driver based on user configuration
	I1206 09:35:22.835025   42128 start.go:297] selected driver: kvm2
	I1206 09:35:22.835034   42128 start.go:901] validating driver "kvm2" against <nil>
	I1206 09:35:22.835048   42128 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:35:22.836153   42128 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:35:22.836258   42128 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/22049-5603/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 09:35:22.854235   42128 install.go:137] /usr/local/bin/docker-machine-driver-kvm2 version is 1.37.0
	I1206 09:35:22.854290   42128 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1206 09:35:22.854635   42128 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 09:35:22.854663   42128 cni.go:84] Creating CNI manager for ""
	I1206 09:35:22.854726   42128 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:35:22.854740   42128 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 09:35:22.854813   42128 start.go:340] cluster config:
	{Name:stopped-upgrade-295047 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-295047 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:35:22.854937   42128 iso.go:125] acquiring lock: {Name:mke799450cff815011aad774a819eea4fb856d3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:35:22.856381   42128 out.go:177] * Starting "stopped-upgrade-295047" primary control-plane node in "stopped-upgrade-295047" cluster
	
	
	==> CRI-O <==
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.401722535Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c240476-3810-49d6-af17-e6fc4ed69e44 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.404467376Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df878d22-65af-42b2-9e34-7687c3ba722a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.404805986Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765013723404787442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df878d22-65af-42b2-9e34-7687c3ba722a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.405700347Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd809427-8579-441e-b2d1-7a564294dc1c name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.406210763Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd809427-8579-441e-b2d1-7a564294dc1c name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.406623650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a70e008e737097382ad5048e14a95d0660903ecec9ee382a96cb326719634b46,PodSandboxId:5834180a255d891045101e0eb52187a9341281c9113b9ebccad68434484836fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765013714396067216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16b2dde50f602c8157807f55eacc428d762585966948b24ce349777091192b33,PodSandboxId:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f171675ac6f14adf0f840f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765013711389491853,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac38eff16829fd0d0c91b1838fbe997f15b3aa367eb91ba85cca70d034aecd8d,PodSandboxId:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5
f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765013711400733135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95be82c81de0e4,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535ceb9ede5a055482ac920e65bf34bc906e333493f3f6928c9f65b1d1661475,PodSandboxId:645ad8db890e1ea94c609d08bd7247aab63d46c1051a68720d536baa65da00a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e
86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765013688520007585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16ff8a41ee03d97e00f64bac7e8d01b02620206dc18cbb1f992a1324a3d2808e,PodSandboxId:a85f57a2fabeb6456fd4bc6400ab08393365a6a7ecaecea92c9ada1aba24d403,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765013688425044767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb8eccbfdd039b2a3ca7a044ee9dd43890db617eee8796dcbb720d46d9ae8f0,PodSandboxId:750d6485343688a501e19d468be5d2f88ae6845eac5fcb3798b33b53bbbfdbaf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:883
20b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765013688319425897,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9eefcf9c509f993cb4916b932a1febe3c6c001d4f4bac7e9d02e859875d6d5c,PodSandboxId:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f1716
75ac6f14adf0f840f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765013688295983402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:17070db2f8f2dd0a499909c06c4ac4c330a2ecc3d4fa881f7f4467d0a1f56638,PodSandboxId:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765013688244556166,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95be82c81de0e4,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb81cd535b67d209a36ac0ef24eec735d25bcb5e170e0383cbe6939107cc90a,PodSandboxId:b8a1c890443c703b96b1d7c10a843e0ee1a59911f8e167a33563e3be9aefd6d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765013622375691626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"
TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcae0ad6f528493f7d768a91eb0b5640ab811469bd4881b8d266de4c65a1530e,PodSandboxId:dc5c8b2bf48b1fbeeccf7a7ca3baa9698f9d337ff0403eeaed1b38a42535142b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765013621856200358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906109c794a06106da1a38cc94530358d3a6b3fa0ebdb8adbb8293f6959a69ab,PodSandboxId:67a78dca14e2ac39f52d6af8b15998bbaacc5b29450a8435bd1544f4e6a29083,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765013610187902668,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1762c3811ed0921bcedf9f48e0729e79858afdbc1a0c39a6db287901e95b83f5,PodSandboxId:02407ed82ca36063f82492c57241580e7555a79fef4b6b8d79716c4e05408d39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765013610096835460,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd809427-8579-441e-b2d1-7a564294dc1c name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.444602634Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c5e57725-7895-4092-bcc0-a56fa27777c4 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.444694896Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c5e57725-7895-4092-bcc0-a56fa27777c4 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.446561789Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4666e0f2-aea9-451e-bbdc-33235c746ac6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.447695372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765013723447612179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4666e0f2-aea9-451e-bbdc-33235c746ac6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.448959156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44cf3471-4de7-4d53-8df4-c5297dfa8b07 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.449238639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44cf3471-4de7-4d53-8df4-c5297dfa8b07 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.449992713Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a70e008e737097382ad5048e14a95d0660903ecec9ee382a96cb326719634b46,PodSandboxId:5834180a255d891045101e0eb52187a9341281c9113b9ebccad68434484836fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765013714396067216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16b2dde50f602c8157807f55eacc428d762585966948b24ce349777091192b33,PodSandboxId:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f171675ac6f14adf0f840f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765013711389491853,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac38eff16829fd0d0c91b1838fbe997f15b3aa367eb91ba85cca70d034aecd8d,PodSandboxId:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5
f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765013711400733135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95be82c81de0e4,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535ceb9ede5a055482ac920e65bf34bc906e333493f3f6928c9f65b1d1661475,PodSandboxId:645ad8db890e1ea94c609d08bd7247aab63d46c1051a68720d536baa65da00a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e
86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765013688520007585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16ff8a41ee03d97e00f64bac7e8d01b02620206dc18cbb1f992a1324a3d2808e,PodSandboxId:a85f57a2fabeb6456fd4bc6400ab08393365a6a7ecaecea92c9ada1aba24d403,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765013688425044767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb8eccbfdd039b2a3ca7a044ee9dd43890db617eee8796dcbb720d46d9ae8f0,PodSandboxId:750d6485343688a501e19d468be5d2f88ae6845eac5fcb3798b33b53bbbfdbaf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:883
20b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765013688319425897,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9eefcf9c509f993cb4916b932a1febe3c6c001d4f4bac7e9d02e859875d6d5c,PodSandboxId:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f1716
75ac6f14adf0f840f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765013688295983402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:17070db2f8f2dd0a499909c06c4ac4c330a2ecc3d4fa881f7f4467d0a1f56638,PodSandboxId:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765013688244556166,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95be82c81de0e4,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb81cd535b67d209a36ac0ef24eec735d25bcb5e170e0383cbe6939107cc90a,PodSandboxId:b8a1c890443c703b96b1d7c10a843e0ee1a59911f8e167a33563e3be9aefd6d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765013622375691626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"
TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcae0ad6f528493f7d768a91eb0b5640ab811469bd4881b8d266de4c65a1530e,PodSandboxId:dc5c8b2bf48b1fbeeccf7a7ca3baa9698f9d337ff0403eeaed1b38a42535142b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765013621856200358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906109c794a06106da1a38cc94530358d3a6b3fa0ebdb8adbb8293f6959a69ab,PodSandboxId:67a78dca14e2ac39f52d6af8b15998bbaacc5b29450a8435bd1544f4e6a29083,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765013610187902668,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1762c3811ed0921bcedf9f48e0729e79858afdbc1a0c39a6db287901e95b83f5,PodSandboxId:02407ed82ca36063f82492c57241580e7555a79fef4b6b8d79716c4e05408d39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765013610096835460,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44cf3471-4de7-4d53-8df4-c5297dfa8b07 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.498181951Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=35256971-5131-4b23-86bd-9dd48e438d22 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.501357268Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:750d6485343688a501e19d468be5d2f88ae6845eac5fcb3798b33b53bbbfdbaf,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-272844,Uid:121786fb98873f9b0be41b77da0f836c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765013687788745972,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 121786fb98873f9b0be41b77da0f836c,kubernetes.io/config.seen: 2025-12-06T09:33:36.105574397Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:645ad8db890e1ea94c609d08bd7247aab63d46c1051a68720d536baa65da00a9,Metadata:&PodSandboxMetadata{Name:kube-proxy-6p7rh,Uid:3359dd42-
379a-4c92-8146-465d519ab5a1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765013687787654018,Labels:map[string]string{controller-revision-hash: 66d5f8d6f6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T09:33:41.362630160Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-272844,Uid:00fe05956219890e3f95be82c81de0e4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765013687779095471,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95
be82c81de0e4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.157:8443,kubernetes.io/config.hash: 00fe05956219890e3f95be82c81de0e4,kubernetes.io/config.seen: 2025-12-06T09:33:36.105563705Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5834180a255d891045101e0eb52187a9341281c9113b9ebccad68434484836fc,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-28k4p,Uid:59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765013687778641469,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T09:33:41.617013391Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a85f57a2fabeb6456fd4bc6
400ab08393365a6a7ecaecea92c9ada1aba24d403,Metadata:&PodSandboxMetadata{Name:etcd-pause-272844,Uid:964525ffd08fbde6270e57ecf995d051,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765013687767036420,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.157:2379,kubernetes.io/config.hash: 964525ffd08fbde6270e57ecf995d051,kubernetes.io/config.seen: 2025-12-06T09:33:36.105575250Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f171675ac6f14adf0f840f4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-272844,Uid:7c18b9e1c337242b08043a2d5c75e23b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765013687755877065,Labels:map[string]str
ing{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7c18b9e1c337242b08043a2d5c75e23b,kubernetes.io/config.seen: 2025-12-06T09:33:36.105573467Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b8a1c890443c703b96b1d7c10a843e0ee1a59911f8e167a33563e3be9aefd6d6,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-28k4p,Uid:59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1765013621941928094,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io
/config.seen: 2025-12-06T09:33:41.617013391Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dc5c8b2bf48b1fbeeccf7a7ca3baa9698f9d337ff0403eeaed1b38a42535142b,Metadata:&PodSandboxMetadata{Name:kube-proxy-6p7rh,Uid:3359dd42-379a-4c92-8146-465d519ab5a1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1765013621690576150,Labels:map[string]string{controller-revision-hash: 66d5f8d6f6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T09:33:41.362630160Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6acf512432a251bc183a49fae3b8e39ec28b31e8337014a7b06af20d47ac779c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-272844,Uid:7c18b9e1c337242b08043a2d5c75e23b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOT
READY,CreatedAt:1765013609906954918,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7c18b9e1c337242b08043a2d5c75e23b,kubernetes.io/config.seen: 2025-12-06T09:33:29.351836543Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:02407ed82ca36063f82492c57241580e7555a79fef4b6b8d79716c4e05408d39,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-272844,Uid:121786fb98873f9b0be41b77da0f836c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1765013609901914280,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,tier: contro
l-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 121786fb98873f9b0be41b77da0f836c,kubernetes.io/config.seen: 2025-12-06T09:33:29.351838571Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cc58dd155c413a48c0e3526076864cca30388b3da4a03db1771c154b822bfd74,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-272844,Uid:00fe05956219890e3f95be82c81de0e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1765013609895691178,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95be82c81de0e4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.157:8443,kubernetes.io/config.hash: 00fe05956219890e3f95be82c81de0e4,kubernetes.io/config.seen: 2025-12-06T09:33:29.351831834Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&Po
dSandbox{Id:67a78dca14e2ac39f52d6af8b15998bbaacc5b29450a8435bd1544f4e6a29083,Metadata:&PodSandboxMetadata{Name:etcd-pause-272844,Uid:964525ffd08fbde6270e57ecf995d051,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1765013609888555491,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.157:2379,kubernetes.io/config.hash: 964525ffd08fbde6270e57ecf995d051,kubernetes.io/config.seen: 2025-12-06T09:33:29.351841002Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=35256971-5131-4b23-86bd-9dd48e438d22 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.499073203Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2328633f-ca48-4f8a-b01a-e6c838fdfe60 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.501699525Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2328633f-ca48-4f8a-b01a-e6c838fdfe60 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.504662252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d44de9b-6804-46fd-b064-167f123aa5e1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.505187672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d44de9b-6804-46fd-b064-167f123aa5e1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.506115622Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4e03578-c796-4b8e-8913-b7ce1494d0b6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.507761927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a70e008e737097382ad5048e14a95d0660903ecec9ee382a96cb326719634b46,PodSandboxId:5834180a255d891045101e0eb52187a9341281c9113b9ebccad68434484836fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765013714396067216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16b2dde50f602c8157807f55eacc428d762585966948b24ce349777091192b33,PodSandboxId:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f171675ac6f14adf0f840f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765013711389491853,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac38eff16829fd0d0c91b1838fbe997f15b3aa367eb91ba85cca70d034aecd8d,PodSandboxId:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5
f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765013711400733135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95be82c81de0e4,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535ceb9ede5a055482ac920e65bf34bc906e333493f3f6928c9f65b1d1661475,PodSandboxId:645ad8db890e1ea94c609d08bd7247aab63d46c1051a68720d536baa65da00a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e
86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765013688520007585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16ff8a41ee03d97e00f64bac7e8d01b02620206dc18cbb1f992a1324a3d2808e,PodSandboxId:a85f57a2fabeb6456fd4bc6400ab08393365a6a7ecaecea92c9ada1aba24d403,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765013688425044767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb8eccbfdd039b2a3ca7a044ee9dd43890db617eee8796dcbb720d46d9ae8f0,PodSandboxId:750d6485343688a501e19d468be5d2f88ae6845eac5fcb3798b33b53bbbfdbaf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:883
20b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765013688319425897,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9eefcf9c509f993cb4916b932a1febe3c6c001d4f4bac7e9d02e859875d6d5c,PodSandboxId:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f1716
75ac6f14adf0f840f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765013688295983402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:17070db2f8f2dd0a499909c06c4ac4c330a2ecc3d4fa881f7f4467d0a1f56638,PodSandboxId:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765013688244556166,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95be82c81de0e4,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb81cd535b67d209a36ac0ef24eec735d25bcb5e170e0383cbe6939107cc90a,PodSandboxId:b8a1c890443c703b96b1d7c10a843e0ee1a59911f8e167a33563e3be9aefd6d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765013622375691626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"
TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcae0ad6f528493f7d768a91eb0b5640ab811469bd4881b8d266de4c65a1530e,PodSandboxId:dc5c8b2bf48b1fbeeccf7a7ca3baa9698f9d337ff0403eeaed1b38a42535142b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765013621856200358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906109c794a06106da1a38cc94530358d3a6b3fa0ebdb8adbb8293f6959a69ab,PodSandboxId:67a78dca14e2ac39f52d6af8b15998bbaacc5b29450a8435bd1544f4e6a29083,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765013610187902668,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1762c3811ed0921bcedf9f48e0729e79858afdbc1a0c39a6db287901e95b83f5,PodSandboxId:02407ed82ca36063f82492c57241580e7555a79fef4b6b8d79716c4e05408d39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765013610096835460,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d44de9b-6804-46fd-b064-167f123aa5e1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.508189591Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765013723508160372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4e03578-c796-4b8e-8913-b7ce1494d0b6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.510521055Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d24e9a6-6c16-4e97-a65b-0c289a6af293 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.510615955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d24e9a6-6c16-4e97-a65b-0c289a6af293 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:23 pause-272844 crio[2547]: time="2025-12-06 09:35:23.510971100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a70e008e737097382ad5048e14a95d0660903ecec9ee382a96cb326719634b46,PodSandboxId:5834180a255d891045101e0eb52187a9341281c9113b9ebccad68434484836fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765013714396067216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16b2dde50f602c8157807f55eacc428d762585966948b24ce349777091192b33,PodSandboxId:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f171675ac6f14adf0f840f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765013711389491853,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac38eff16829fd0d0c91b1838fbe997f15b3aa367eb91ba85cca70d034aecd8d,PodSandboxId:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5
f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765013711400733135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95be82c81de0e4,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535ceb9ede5a055482ac920e65bf34bc906e333493f3f6928c9f65b1d1661475,PodSandboxId:645ad8db890e1ea94c609d08bd7247aab63d46c1051a68720d536baa65da00a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e
86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765013688520007585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16ff8a41ee03d97e00f64bac7e8d01b02620206dc18cbb1f992a1324a3d2808e,PodSandboxId:a85f57a2fabeb6456fd4bc6400ab08393365a6a7ecaecea92c9ada1aba24d403,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765013688425044767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb8eccbfdd039b2a3ca7a044ee9dd43890db617eee8796dcbb720d46d9ae8f0,PodSandboxId:750d6485343688a501e19d468be5d2f88ae6845eac5fcb3798b33b53bbbfdbaf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:883
20b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765013688319425897,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9eefcf9c509f993cb4916b932a1febe3c6c001d4f4bac7e9d02e859875d6d5c,PodSandboxId:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f1716
75ac6f14adf0f840f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765013688295983402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:17070db2f8f2dd0a499909c06c4ac4c330a2ecc3d4fa881f7f4467d0a1f56638,PodSandboxId:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765013688244556166,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95be82c81de0e4,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb81cd535b67d209a36ac0ef24eec735d25bcb5e170e0383cbe6939107cc90a,PodSandboxId:b8a1c890443c703b96b1d7c10a843e0ee1a59911f8e167a33563e3be9aefd6d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765013622375691626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"
TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcae0ad6f528493f7d768a91eb0b5640ab811469bd4881b8d266de4c65a1530e,PodSandboxId:dc5c8b2bf48b1fbeeccf7a7ca3baa9698f9d337ff0403eeaed1b38a42535142b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765013621856200358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906109c794a06106da1a38cc94530358d3a6b3fa0ebdb8adbb8293f6959a69ab,PodSandboxId:67a78dca14e2ac39f52d6af8b15998bbaacc5b29450a8435bd1544f4e6a29083,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765013610187902668,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1762c3811ed0921bcedf9f48e0729e79858afdbc1a0c39a6db287901e95b83f5,PodSandboxId:02407ed82ca36063f82492c57241580e7555a79fef4b6b8d79716c4e05408d39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765013610096835460,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d24e9a6-6c16-4e97-a65b-0c289a6af293 name=/runtime.v1.RuntimeService/ListContainers
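
The crio journal excerpts above are CRI-O answering the kubelet's ListPodSandbox, ListContainers and ImageFsInfo RPCs. As a minimal sketch, the same data can be re-queried interactively from the node with crictl (assuming the default CRI-O socket; the profile name pause-272844 is taken from these logs):

    minikube -p pause-272844 ssh -- sudo crictl pods          # pod sandboxes, including SANDBOX_NOTREADY ones
    minikube -p pause-272844 ssh -- sudo crictl ps -a         # all containers, running and exited
    minikube -p pause-272844 ssh -- sudo crictl imagefsinfo   # image store usage, as in the ImageFsInfo response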
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a70e008e73709       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 seconds ago        Running             coredns                   1                   5834180a255d8       coredns-66bc5c9577-28k4p               kube-system
	ac38eff16829f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   12 seconds ago       Running             kube-apiserver            2                   98d027bab3f8c       kube-apiserver-pause-272844            kube-system
	16b2dde50f602       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   12 seconds ago       Running             kube-controller-manager   2                   e5631ab7f2e34       kube-controller-manager-pause-272844   kube-system
	535ceb9ede5a0       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   35 seconds ago       Running             kube-proxy                1                   645ad8db890e1       kube-proxy-6p7rh                       kube-system
	16ff8a41ee03d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   35 seconds ago       Running             etcd                      1                   a85f57a2fabeb       etcd-pause-272844                      kube-system
	1eb8eccbfdd03       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   35 seconds ago       Running             kube-scheduler            1                   750d648534368       kube-scheduler-pause-272844            kube-system
	b9eefcf9c509f       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   35 seconds ago       Exited              kube-controller-manager   1                   e5631ab7f2e34       kube-controller-manager-pause-272844   kube-system
	17070db2f8f2d       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   35 seconds ago       Exited              kube-apiserver            1                   98d027bab3f8c       kube-apiserver-pause-272844            kube-system
	5cb81cd535b67       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   b8a1c890443c7       coredns-66bc5c9577-28k4p               kube-system
	fcae0ad6f5284       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   About a minute ago   Exited              kube-proxy                0                   dc5c8b2bf48b1       kube-proxy-6p7rh                       kube-system
	906109c794a06       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Exited              etcd                      0                   67a78dca14e2a       etcd-pause-272844                      kube-system
	1762c3811ed09       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Exited              kube-scheduler            0                   02407ed82ca36       kube-scheduler-pause-272844            kube-system
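
Any container ID in this table can be inspected further; for instance, the exited first-attempt coredns container (5cb81cd535b67) still has retrievable logs. A sketch of the two usual ways to pull them, assuming the kubeconfig context created for this profile is still available:

    minikube -p pause-272844 ssh -- sudo crictl logs 5cb81cd535b67
    kubectl --context pause-272844 -n kube-system logs coredns-66bc5c9577-28k4p --previous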
	
	
	==> coredns [5cb81cd535b67d209a36ac0ef24eec735d25bcb5e170e0383cbe6939107cc90a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54623 - 49864 "HINFO IN 6801714884003529768.4335844317188538113. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032677052s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a70e008e737097382ad5048e14a95d0660903ecec9ee382a96cb326719634b46] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52882 - 18090 "HINFO IN 2848364569909425307.5377329861672936372. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.141042576s
	
	
	==> describe nodes <==
	Name:               pause-272844
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-272844
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=pause-272844
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_33_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:33:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-272844
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:35:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:35:13 +0000   Sat, 06 Dec 2025 09:33:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:35:13 +0000   Sat, 06 Dec 2025 09:33:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:35:13 +0000   Sat, 06 Dec 2025 09:33:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:35:13 +0000   Sat, 06 Dec 2025 09:33:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.157
	  Hostname:    pause-272844
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 526bf343573545918f362d103dc54c5d
	  System UUID:                526bf343-5735-4591-8f36-2d103dc54c5d
	  Boot ID:                    3dc0539b-cd30-4287-beed-5b8708964a60
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-28k4p                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     102s
	  kube-system                 etcd-pause-272844                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         107s
	  kube-system                 kube-apiserver-pause-272844             250m (12%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-pause-272844    200m (10%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-6p7rh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-pause-272844             100m (5%)     0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  Starting                 9s                   kube-proxy       
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)  kubelet          Node pause-272844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)  kubelet          Node pause-272844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x7 over 114s)  kubelet          Node pause-272844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  114s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    107s                 kubelet          Node pause-272844 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  107s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  107s                 kubelet          Node pause-272844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     107s                 kubelet          Node pause-272844 status is now: NodeHasSufficientPID
	  Normal  Starting                 107s                 kubelet          Starting kubelet.
	  Normal  NodeReady                106s                 kubelet          Node pause-272844 status is now: NodeReady
	  Normal  RegisteredNode           103s                 node-controller  Node pause-272844 event: Registered Node pause-272844 in Controller
	  Normal  Starting                 32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s (x8 over 32s)    kubelet          Node pause-272844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s (x8 over 32s)    kubelet          Node pause-272844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s (x7 over 32s)    kubelet          Node pause-272844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                   node-controller  Node pause-272844 event: Registered Node pause-272844 in Controller
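
The node description above can be regenerated after the run, again assuming the pause-272844 kubeconfig context still exists:

    kubectl --context pause-272844 describe node pause-272844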
	
	
	==> dmesg <==
	[Dec 6 09:33] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000042] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005297] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.182254] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087120] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.105984] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.127791] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.921621] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 6 09:34] kauditd_printk_skb: 190 callbacks suppressed
	[  +7.031216] kauditd_printk_skb: 56 callbacks suppressed
	[Dec 6 09:35] kauditd_printk_skb: 260 callbacks suppressed
	[  +1.789479] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [16ff8a41ee03d97e00f64bac7e8d01b02620206dc18cbb1f992a1324a3d2808e] <==
	{"level":"warn","ts":"2025-12-06T09:35:12.610849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.628947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.647968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.662048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.674040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.684625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.718413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.725492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.735376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.746648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.758558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.769930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.783937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.805473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.816569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.835493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.864415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.866473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.877613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.883380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.894093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.905977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.917069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.926731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:13.007816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46620","server-name":"","error":"EOF"}
	
	
	==> etcd [906109c794a06106da1a38cc94530358d3a6b3fa0ebdb8adbb8293f6959a69ab] <==
	{"level":"info","ts":"2025-12-06T09:33:45.462943Z","caller":"traceutil/trace.go:172","msg":"trace[188685724] linearizableReadLoop","detail":"{readStateIndex:424; appliedIndex:424; }","duration":"122.219563ms","start":"2025-12-06T09:33:45.340704Z","end":"2025-12-06T09:33:45.462923Z","steps":["trace[188685724] 'read index received'  (duration: 122.214113ms)","trace[188685724] 'applied index is now lower than readState.Index'  (duration: 4.639µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:33:45.591461Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"250.727961ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-272844\" limit:1 ","response":"range_response_count:1 size:5280"}
	{"level":"info","ts":"2025-12-06T09:33:45.591622Z","caller":"traceutil/trace.go:172","msg":"trace[84994929] range","detail":"{range_begin:/registry/minions/pause-272844; range_end:; response_count:1; response_revision:411; }","duration":"250.909559ms","start":"2025-12-06T09:33:45.340701Z","end":"2025-12-06T09:33:45.591610Z","steps":["trace[84994929] 'agreement among raft nodes before linearized reading'  (duration: 122.384808ms)","trace[84994929] 'range keys from in-memory index tree'  (duration: 128.284267ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:33:45.592410Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.568537ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8706191398168190689 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.157\" mod_revision:240 > success:<request_put:<key:\"/registry/masterleases/192.168.50.157\" value_size:67 lease:8706191398168190686 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.157\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T09:33:45.592539Z","caller":"traceutil/trace.go:172","msg":"trace[1829246113] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"253.140578ms","start":"2025-12-06T09:33:45.339355Z","end":"2025-12-06T09:33:45.592495Z","steps":["trace[1829246113] 'process raft request'  (duration: 123.841539ms)","trace[1829246113] 'compare'  (duration: 128.116218ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:34:10.734008Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.833017ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8706191398168190890 > lease_revoke:<id:78d29af302605f31>","response":"size:28"}
	{"level":"info","ts":"2025-12-06T09:34:25.015164Z","caller":"traceutil/trace.go:172","msg":"trace[1608256571] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"247.375274ms","start":"2025-12-06T09:34:24.767773Z","end":"2025-12-06T09:34:25.015149Z","steps":["trace[1608256571] 'process raft request'  (duration: 246.59802ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:34:31.925893Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-06T09:34:31.925957Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-272844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.157:2380"],"advertise-client-urls":["https://192.168.50.157:2379"]}
	{"level":"error","ts":"2025-12-06T09:34:31.926058Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:34:32.008385Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:34:32.009837Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:34:32.009926Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e23094fafc8078d2","current-leader-member-id":"e23094fafc8078d2"}
	{"level":"info","ts":"2025-12-06T09:34:32.010002Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-06T09:34:32.010031Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-06T09:34:32.010365Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:34:32.010459Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:34:32.010473Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T09:34:32.010524Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.157:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:34:32.010536Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.157:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:34:32.010548Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.157:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:34:32.013063Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.157:2380"}
	{"level":"error","ts":"2025-12-06T09:34:32.013102Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.157:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:34:32.013119Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.157:2380"}
	{"level":"info","ts":"2025-12-06T09:34:32.013124Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-272844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.157:2380"],"advertise-client-urls":["https://192.168.50.157:2379"]}
	
	
	==> kernel <==
	 09:35:23 up 2 min,  0 users,  load average: 0.81, 0.53, 0.21
	Linux pause-272844 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [17070db2f8f2dd0a499909c06c4ac4c330a2ecc3d4fa881f7f4467d0a1f56638] <==
	I1206 09:34:50.010556       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	W1206 09:34:50.010652       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:50.010713       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1206 09:34:50.027524       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1206 09:34:50.065044       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1206 09:34:50.067314       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1206 09:34:50.067703       1 instance.go:239] Using reconciler: lease
	W1206 09:34:50.070943       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1206 09:34:50.071094       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:51.012681       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:51.012834       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:51.075383       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:52.395809       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:52.514068       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:52.818444       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:54.700599       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:54.934513       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:55.022779       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:58.348414       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:58.535649       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:59.094774       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:35:05.030188       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:35:05.233397       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:35:06.141701       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1206 09:35:10.068860       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [ac38eff16829fd0d0c91b1838fbe997f15b3aa367eb91ba85cca70d034aecd8d] <==
	I1206 09:35:13.746238       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:35:13.746144       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 09:35:13.746159       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:35:13.750475       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:35:13.750589       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 09:35:13.750825       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:35:13.754792       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:35:13.756474       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1206 09:35:13.761156       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:35:13.764333       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1206 09:35:13.764432       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:35:13.764354       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1206 09:35:13.766846       1 aggregator.go:171] initial CRD sync complete...
	I1206 09:35:13.766853       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 09:35:13.766858       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:35:13.766862       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:35:13.800513       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:35:14.199971       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:35:14.559821       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:35:15.360355       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:35:15.408853       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:35:15.460474       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:35:15.470487       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:35:17.353449       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:35:17.400849       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [16b2dde50f602c8157807f55eacc428d762585966948b24ce349777091192b33] <==
	I1206 09:35:17.099375       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1206 09:35:17.100601       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1206 09:35:17.101768       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 09:35:17.103027       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 09:35:17.105337       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:35:17.106158       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:35:17.107456       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:35:17.108641       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 09:35:17.111551       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:35:17.111696       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:35:17.111749       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:35:17.111594       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:35:17.115159       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1206 09:35:17.117928       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 09:35:17.119141       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:35:17.126484       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 09:35:17.132503       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:35:17.136685       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:35:17.136997       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 09:35:17.137373       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:35:17.142218       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1206 09:35:17.145083       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 09:35:17.149236       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:35:17.149747       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1206 09:35:17.154707       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [b9eefcf9c509f993cb4916b932a1febe3c6c001d4f4bac7e9d02e859875d6d5c] <==
	I1206 09:34:49.691916       1 serving.go:386] Generated self-signed cert in-memory
	I1206 09:34:50.967125       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1206 09:34:50.967168       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:34:50.970586       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1206 09:34:50.970812       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1206 09:34:50.971097       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1206 09:34:50.971423       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1206 09:35:11.077715       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.50.157:8443/healthz\": dial tcp 192.168.50.157:8443: connect: connection refused"
	
	
	==> kube-proxy [535ceb9ede5a055482ac920e65bf34bc906e333493f3f6928c9f65b1d1661475] <==
	I1206 09:35:14.536533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:35:14.636817       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:35:14.636867       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.157"]
	E1206 09:35:14.636944       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:35:14.711833       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:35:14.711905       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:35:14.711926       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:35:14.726743       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:35:14.727043       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:35:14.727073       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:35:14.735094       1 config.go:200] "Starting service config controller"
	I1206 09:35:14.735126       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:35:14.735145       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:35:14.735148       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:35:14.735158       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:35:14.735161       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:35:14.735378       1 config.go:309] "Starting node config controller"
	I1206 09:35:14.735490       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:35:14.835622       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:35:14.835664       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:35:14.835692       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:35:14.836215       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [fcae0ad6f528493f7d768a91eb0b5640ab811469bd4881b8d266de4c65a1530e] <==
	I1206 09:33:42.156911       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:33:42.267253       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:33:42.267965       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.157"]
	E1206 09:33:42.269483       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:33:42.370424       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:33:42.372452       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:33:42.372500       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:33:42.387022       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:33:42.387369       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:33:42.387399       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:33:42.392992       1 config.go:200] "Starting service config controller"
	I1206 09:33:42.393038       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:33:42.393065       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:33:42.393080       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:33:42.393107       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:33:42.393121       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:33:42.393756       1 config.go:309] "Starting node config controller"
	I1206 09:33:42.393854       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:33:42.393945       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:33:42.494190       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:33:42.494242       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:33:42.494342       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1762c3811ed0921bcedf9f48e0729e79858afdbc1a0c39a6db287901e95b83f5] <==
	E1206 09:33:33.438191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:33:34.310259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:33:34.323422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:33:34.346800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:33:34.376819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:33:34.408359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:33:34.501234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:33:34.530785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:33:34.530883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:33:34.654952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:33:34.660627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:33:34.692947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:33:34.767489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:33:34.818665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:33:34.858753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:33:34.859461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:33:34.881493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:33:34.940476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:33:37.819782       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:34:31.935466       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1206 09:34:31.935519       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1206 09:34:31.935539       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1206 09:34:31.935577       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:34:31.935816       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1206 09:34:31.935859       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [1eb8eccbfdd039b2a3ca7a044ee9dd43890db617eee8796dcbb720d46d9ae8f0] <==
	E1206 09:35:11.092894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.50.157:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.50.157:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:35:11.092952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.50.157:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.50.157:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:35:11.093057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.50.157:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.50.157:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:35:11.093087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.50.157:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.50.157:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:35:11.093253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.50.157:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.50.157:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:35:13.671378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:35:13.673537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:35:13.673665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:35:13.673710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:35:13.673754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:35:13.673792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:35:13.673828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:35:13.673874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:35:13.673911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:35:13.673945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:35:13.673980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:35:13.674008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:35:13.685771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:35:13.686148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:35:13.686467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:35:13.686731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:35:13.686931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:35:13.687151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:35:13.703501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:35:15.788914       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:35:11 pause-272844 kubelet[3392]: E1206 09:35:11.585124    3392 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.50.157:8443/api/v1/nodes/pause-272844\": dial tcp 192.168.50.157:8443: connect: connection refused"
	Dec 06 09:35:12 pause-272844 kubelet[3392]: E1206 09:35:12.392868    3392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-272844\" not found" node="pause-272844"
	Dec 06 09:35:12 pause-272844 kubelet[3392]: E1206 09:35:12.400927    3392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-272844\" not found" node="pause-272844"
	Dec 06 09:35:12 pause-272844 kubelet[3392]: E1206 09:35:12.402147    3392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-272844\" not found" node="pause-272844"
	Dec 06 09:35:12 pause-272844 kubelet[3392]: I1206 09:35:12.884381    3392 kubelet_node_status.go:75] "Attempting to register node" node="pause-272844"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: E1206 09:35:13.405620    3392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-272844\" not found" node="pause-272844"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: E1206 09:35:13.405991    3392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-272844\" not found" node="pause-272844"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: E1206 09:35:13.406200    3392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-272844\" not found" node="pause-272844"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: I1206 09:35:13.778611    3392 kubelet_node_status.go:124] "Node was previously registered" node="pause-272844"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: I1206 09:35:13.778701    3392 kubelet_node_status.go:78] "Successfully registered node" node="pause-272844"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: E1206 09:35:13.778722    3392 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"pause-272844\": node \"pause-272844\" not found"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: I1206 09:35:13.781640    3392 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: I1206 09:35:13.783130    3392 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: E1206 09:35:13.796954    3392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"pause-272844\" not found"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: E1206 09:35:13.897164    3392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"pause-272844\" not found"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: E1206 09:35:13.998338    3392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"pause-272844\" not found"
	Dec 06 09:35:14 pause-272844 kubelet[3392]: I1206 09:35:14.057972    3392 apiserver.go:52] "Watching apiserver"
	Dec 06 09:35:14 pause-272844 kubelet[3392]: I1206 09:35:14.105434    3392 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 06 09:35:14 pause-272844 kubelet[3392]: I1206 09:35:14.197373    3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3359dd42-379a-4c92-8146-465d519ab5a1-lib-modules\") pod \"kube-proxy-6p7rh\" (UID: \"3359dd42-379a-4c92-8146-465d519ab5a1\") " pod="kube-system/kube-proxy-6p7rh"
	Dec 06 09:35:14 pause-272844 kubelet[3392]: I1206 09:35:14.197446    3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3359dd42-379a-4c92-8146-465d519ab5a1-xtables-lock\") pod \"kube-proxy-6p7rh\" (UID: \"3359dd42-379a-4c92-8146-465d519ab5a1\") " pod="kube-system/kube-proxy-6p7rh"
	Dec 06 09:35:14 pause-272844 kubelet[3392]: I1206 09:35:14.363535    3392 scope.go:117] "RemoveContainer" containerID="fcae0ad6f528493f7d768a91eb0b5640ab811469bd4881b8d266de4c65a1530e"
	Dec 06 09:35:14 pause-272844 kubelet[3392]: I1206 09:35:14.433195    3392 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-272844"
	Dec 06 09:35:14 pause-272844 kubelet[3392]: E1206 09:35:14.464443    3392 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-272844\" already exists" pod="kube-system/kube-apiserver-pause-272844"
	Dec 06 09:35:21 pause-272844 kubelet[3392]: E1206 09:35:21.294668    3392 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765013721292177214 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 06 09:35:21 pause-272844 kubelet[3392]: E1206 09:35:21.294758    3392 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765013721292177214 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-272844 -n pause-272844
helpers_test.go:269: (dbg) Run:  kubectl --context pause-272844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-272844 -n pause-272844
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-272844 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-272844 logs -n 25: (1.169127272s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-920584 sudo systemctl cat docker --no-pager                                                                                                       │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo cat /etc/docker/daemon.json                                                                                                           │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo docker system info                                                                                                                    │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo systemctl status cri-docker --all --full --no-pager                                                                                   │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo systemctl cat cri-docker --no-pager                                                                                                   │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                              │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                        │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo cri-dockerd --version                                                                                                                 │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo systemctl status containerd --all --full --no-pager                                                                                   │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo systemctl cat containerd --no-pager                                                                                                   │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo cat /lib/systemd/system/containerd.service                                                                                            │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo cat /etc/containerd/config.toml                                                                                                       │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo containerd config dump                                                                                                                │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo systemctl status crio --all --full --no-pager                                                                                         │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo systemctl cat crio --no-pager                                                                                                         │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                               │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ ssh     │ -p cilium-920584 sudo crio config                                                                                                                           │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ delete  │ -p cilium-920584                                                                                                                                            │ cilium-920584             │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │ 06 Dec 25 09:34 UTC │
	│ start   │ -p guest-688206 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                     │ guest-688206              │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │ 06 Dec 25 09:35 UTC │
	│ ssh     │ -p NoKubernetes-030154 sudo systemctl is-active --quiet service kubelet                                                                                     │ NoKubernetes-030154       │ jenkins │ v1.37.0 │ 06 Dec 25 09:34 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-044478 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ running-upgrade-044478    │ jenkins │ v1.37.0 │ 06 Dec 25 09:35 UTC │                     │
	│ delete  │ -p running-upgrade-044478                                                                                                                                   │ running-upgrade-044478    │ jenkins │ v1.37.0 │ 06 Dec 25 09:35 UTC │ 06 Dec 25 09:35 UTC │
	│ start   │ -p kubernetes-upgrade-460997 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                      │ kubernetes-upgrade-460997 │ jenkins │ v1.37.0 │ 06 Dec 25 09:35 UTC │                     │
	│ stop    │ -p NoKubernetes-030154                                                                                                                                      │ NoKubernetes-030154       │ jenkins │ v1.37.0 │ 06 Dec 25 09:35 UTC │ 06 Dec 25 09:35 UTC │
	│ start   │ -p NoKubernetes-030154 --driver=kvm2  --container-runtime=crio                                                                                              │ NoKubernetes-030154       │ jenkins │ v1.37.0 │ 06 Dec 25 09:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:35:22
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:35:22.774553   42128 out.go:345] Setting OutFile to fd 1 ...
	I1206 09:35:22.774910   42128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1206 09:35:22.774914   42128 out.go:358] Setting ErrFile to fd 2...
	I1206 09:35:22.774918   42128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1206 09:35:22.775110   42128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 09:35:22.775670   42128 out.go:352] Setting JSON to false
	I1206 09:35:22.776580   42128 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4665,"bootTime":1765009058,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:35:22.776644   42128 start.go:139] virtualization: kvm guest
	I1206 09:35:22.778312   42128 out.go:177] * [stopped-upgrade-295047] minikube v1.35.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:35:22.779666   42128 out.go:177]   - MINIKUBE_LOCATION=22049
	I1206 09:35:22.779669   42128 notify.go:220] Checking for updates...
	I1206 09:35:22.780813   42128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:35:22.782827   42128 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	I1206 09:35:22.783770   42128 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:35:22.785022   42128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:35:22.789667   42128 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig1995796069
	I1206 09:35:22.791382   42128 config.go:182] Loaded profile config "NoKubernetes-030154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1206 09:35:22.791544   42128 config.go:182] Loaded profile config "guest-688206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1206 09:35:22.791664   42128 config.go:182] Loaded profile config "kubernetes-upgrade-460997": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1206 09:35:22.791842   42128 config.go:182] Loaded profile config "pause-272844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:35:22.792038   42128 driver.go:394] Setting default libvirt URI to qemu:///system
	I1206 09:35:22.833954   42128 out.go:177] * Using the kvm2 driver based on user configuration
	I1206 09:35:22.835025   42128 start.go:297] selected driver: kvm2
	I1206 09:35:22.835034   42128 start.go:901] validating driver "kvm2" against <nil>
	I1206 09:35:22.835048   42128 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:35:22.836153   42128 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:35:22.836258   42128 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/22049-5603/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1206 09:35:22.854235   42128 install.go:137] /usr/local/bin/docker-machine-driver-kvm2 version is 1.37.0
	I1206 09:35:22.854290   42128 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1206 09:35:22.854635   42128 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 09:35:22.854663   42128 cni.go:84] Creating CNI manager for ""
	I1206 09:35:22.854726   42128 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:35:22.854740   42128 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 09:35:22.854813   42128 start.go:340] cluster config:
	{Name:stopped-upgrade-295047 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:stopped-upgrade-295047 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:35:22.854937   42128 iso.go:125] acquiring lock: {Name:mke799450cff815011aad774a819eea4fb856d3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:35:22.856381   42128 out.go:177] * Starting "stopped-upgrade-295047" primary control-plane node in "stopped-upgrade-295047" cluster
	I1206 09:35:19.555028   41834 main.go:143] libmachine: waiting for domain to start...
	I1206 09:35:19.556415   41834 main.go:143] libmachine: domain is now running
	I1206 09:35:19.556428   41834 main.go:143] libmachine: waiting for IP...
	I1206 09:35:19.557711   41834 main.go:143] libmachine: domain kubernetes-upgrade-460997 has defined MAC address 52:54:00:a4:e7:ea in network mk-kubernetes-upgrade-460997
	I1206 09:35:19.558610   41834 main.go:143] libmachine: no network interface addresses found for domain kubernetes-upgrade-460997 (source=lease)
	I1206 09:35:19.558633   41834 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:35:19.559009   41834 main.go:143] libmachine: unable to find current IP address of domain kubernetes-upgrade-460997 in network mk-kubernetes-upgrade-460997 (interfaces detected: [])
	I1206 09:35:19.559048   41834 retry.go:31] will retry after 285.360314ms: waiting for domain to come up
	I1206 09:35:19.846519   41834 main.go:143] libmachine: domain kubernetes-upgrade-460997 has defined MAC address 52:54:00:a4:e7:ea in network mk-kubernetes-upgrade-460997
	I1206 09:35:19.847161   41834 main.go:143] libmachine: no network interface addresses found for domain kubernetes-upgrade-460997 (source=lease)
	I1206 09:35:19.847176   41834 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:35:19.847534   41834 main.go:143] libmachine: unable to find current IP address of domain kubernetes-upgrade-460997 in network mk-kubernetes-upgrade-460997 (interfaces detected: [])
	I1206 09:35:19.847583   41834 retry.go:31] will retry after 381.845751ms: waiting for domain to come up
	I1206 09:35:20.231390   41834 main.go:143] libmachine: domain kubernetes-upgrade-460997 has defined MAC address 52:54:00:a4:e7:ea in network mk-kubernetes-upgrade-460997
	I1206 09:35:20.232118   41834 main.go:143] libmachine: no network interface addresses found for domain kubernetes-upgrade-460997 (source=lease)
	I1206 09:35:20.232151   41834 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:35:20.232550   41834 main.go:143] libmachine: unable to find current IP address of domain kubernetes-upgrade-460997 in network mk-kubernetes-upgrade-460997 (interfaces detected: [])
	I1206 09:35:20.232588   41834 retry.go:31] will retry after 378.485099ms: waiting for domain to come up
	I1206 09:35:20.613030   41834 main.go:143] libmachine: domain kubernetes-upgrade-460997 has defined MAC address 52:54:00:a4:e7:ea in network mk-kubernetes-upgrade-460997
	I1206 09:35:20.613695   41834 main.go:143] libmachine: no network interface addresses found for domain kubernetes-upgrade-460997 (source=lease)
	I1206 09:35:20.613719   41834 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:35:20.614038   41834 main.go:143] libmachine: unable to find current IP address of domain kubernetes-upgrade-460997 in network mk-kubernetes-upgrade-460997 (interfaces detected: [])
	I1206 09:35:20.614078   41834 retry.go:31] will retry after 401.253412ms: waiting for domain to come up
	I1206 09:35:21.016487   41834 main.go:143] libmachine: domain kubernetes-upgrade-460997 has defined MAC address 52:54:00:a4:e7:ea in network mk-kubernetes-upgrade-460997
	I1206 09:35:21.017014   41834 main.go:143] libmachine: no network interface addresses found for domain kubernetes-upgrade-460997 (source=lease)
	I1206 09:35:21.017034   41834 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:35:21.017303   41834 main.go:143] libmachine: unable to find current IP address of domain kubernetes-upgrade-460997 in network mk-kubernetes-upgrade-460997 (interfaces detected: [])
	I1206 09:35:21.017338   41834 retry.go:31] will retry after 463.564247ms: waiting for domain to come up
	I1206 09:35:21.483733   41834 main.go:143] libmachine: domain kubernetes-upgrade-460997 has defined MAC address 52:54:00:a4:e7:ea in network mk-kubernetes-upgrade-460997
	I1206 09:35:21.484667   41834 main.go:143] libmachine: no network interface addresses found for domain kubernetes-upgrade-460997 (source=lease)
	I1206 09:35:21.484682   41834 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:35:21.485105   41834 main.go:143] libmachine: unable to find current IP address of domain kubernetes-upgrade-460997 in network mk-kubernetes-upgrade-460997 (interfaces detected: [])
	I1206 09:35:21.485141   41834 retry.go:31] will retry after 669.242922ms: waiting for domain to come up
	I1206 09:35:22.156022   41834 main.go:143] libmachine: domain kubernetes-upgrade-460997 has defined MAC address 52:54:00:a4:e7:ea in network mk-kubernetes-upgrade-460997
	I1206 09:35:22.156694   41834 main.go:143] libmachine: no network interface addresses found for domain kubernetes-upgrade-460997 (source=lease)
	I1206 09:35:22.156713   41834 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:35:22.157030   41834 main.go:143] libmachine: unable to find current IP address of domain kubernetes-upgrade-460997 in network mk-kubernetes-upgrade-460997 (interfaces detected: [])
	I1206 09:35:22.157065   41834 retry.go:31] will retry after 966.018369ms: waiting for domain to come up
	I1206 09:35:23.124059   41834 main.go:143] libmachine: domain kubernetes-upgrade-460997 has defined MAC address 52:54:00:a4:e7:ea in network mk-kubernetes-upgrade-460997
	I1206 09:35:23.124671   41834 main.go:143] libmachine: no network interface addresses found for domain kubernetes-upgrade-460997 (source=lease)
	I1206 09:35:23.124689   41834 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:35:23.125032   41834 main.go:143] libmachine: unable to find current IP address of domain kubernetes-upgrade-460997 in network mk-kubernetes-upgrade-460997 (interfaces detected: [])
	I1206 09:35:23.125067   41834 retry.go:31] will retry after 1.049607338s: waiting for domain to come up
	I1206 09:35:24.176333   41834 main.go:143] libmachine: domain kubernetes-upgrade-460997 has defined MAC address 52:54:00:a4:e7:ea in network mk-kubernetes-upgrade-460997
	I1206 09:35:24.176981   41834 main.go:143] libmachine: no network interface addresses found for domain kubernetes-upgrade-460997 (source=lease)
	I1206 09:35:24.177010   41834 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:35:24.177326   41834 main.go:143] libmachine: unable to find current IP address of domain kubernetes-upgrade-460997 in network mk-kubernetes-upgrade-460997 (interfaces detected: [])
	I1206 09:35:24.177359   41834 retry.go:31] will retry after 1.804484842s: waiting for domain to come up
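	The "will retry after …: waiting for domain to come up" lines above show a poll-with-backoff loop: libmachine re-queries libvirt for the domain's lease/ARP entries with a growing, jittered delay until an IP appears. A minimal, self-contained Go sketch of that pattern (illustrative only; the base delay, growth factor and jitter are assumptions, not minikube's actual retry.go values):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check() until it succeeds or maxWait elapses,
	// sleeping a randomised, growing delay between attempts --
	// the same shape as the "will retry after Xms" lines above.
	func waitFor(check func() (bool, error), maxWait time.Duration) error {
		deadline := time.Now().Add(maxWait)
		delay := 250 * time.Millisecond // assumed starting delay
		for attempt := 1; ; attempt++ {
			ok, err := check()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			// Grow the base delay and add jitter so retries spread out.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("attempt %d: will retry after %v\n", attempt, wait)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
	}

	func main() {
		start := time.Now()
		// Hypothetical check: pretend the "domain" gets an IP after ~3s.
		err := waitFor(func() (bool, error) {
			return time.Since(start) > 3*time.Second, nil
		}, 30*time.Second)
		fmt.Println("done:", err)
	}

	Each failed probe widens the next wait, which keeps the libvirt queries cheap while the guest is still booting.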
	
	
	==> CRI-O <==
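	The debug entries below are CRI-O's trace of CRI (Container Runtime Interface) gRPC calls on its socket — Version, ImageFsInfo, ListContainers and ListPodSandbox — issued by clients such as the kubelet or crictl. As a hedged illustration only (the socket path and timeout are assumptions, and this is not minikube's own code), the state-filtered ListContainers request seen in the log can be reproduced with the published k8s.io/cri-api client:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's default CRI socket; adjust the path if your host differs.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Mirrors the filtered request in the log: only containers whose
		// state is CONTAINER_RUNNING are returned.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{
				State: &runtimeapi.ContainerStateValue{
					State: runtimeapi.ContainerState_CONTAINER_RUNNING,
				},
			},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}

	This is essentially what `crictl ps` asks for, which is why the filtered response later in this log lists only the CONTAINER_RUNNING entries while the unfiltered one also includes CONTAINER_EXITED containers.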
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.105838347Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=701bbcfd-7aed-4467-bf36-75a1f3d727d9 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.107235791Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a578e676-7476-4043-992c-6b6bdf3cf486 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.107618823Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765013725107597490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a578e676-7476-4043-992c-6b6bdf3cf486 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.108460739Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=236d9891-e6ba-473c-b9c3-e31f526133b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.108597036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=236d9891-e6ba-473c-b9c3-e31f526133b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.109128797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a70e008e737097382ad5048e14a95d0660903ecec9ee382a96cb326719634b46,PodSandboxId:5834180a255d891045101e0eb52187a9341281c9113b9ebccad68434484836fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765013714396067216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16b2dde50f602c8157807f55eacc428d762585966948b24ce349777091192b33,PodSandboxId:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f171675ac6f14adf0f840f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765013711389491853,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac38eff16829fd0d0c91b1838fbe997f15b3aa367eb91ba85cca70d034aecd8d,PodSandboxId:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5
f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765013711400733135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95be82c81de0e4,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535ceb9ede5a055482ac920e65bf34bc906e333493f3f6928c9f65b1d1661475,PodSandboxId:645ad8db890e1ea94c609d08bd7247aab63d46c1051a68720d536baa65da00a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e
86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765013688520007585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16ff8a41ee03d97e00f64bac7e8d01b02620206dc18cbb1f992a1324a3d2808e,PodSandboxId:a85f57a2fabeb6456fd4bc6400ab08393365a6a7ecaecea92c9ada1aba24d403,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765013688425044767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb8eccbfdd039b2a3ca7a044ee9dd43890db617eee8796dcbb720d46d9ae8f0,PodSandboxId:750d6485343688a501e19d468be5d2f88ae6845eac5fcb3798b33b53bbbfdbaf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:883
20b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765013688319425897,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9eefcf9c509f993cb4916b932a1febe3c6c001d4f4bac7e9d02e859875d6d5c,PodSandboxId:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f1716
75ac6f14adf0f840f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765013688295983402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:17070db2f8f2dd0a499909c06c4ac4c330a2ecc3d4fa881f7f4467d0a1f56638,PodSandboxId:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765013688244556166,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95be82c81de0e4,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb81cd535b67d209a36ac0ef24eec735d25bcb5e170e0383cbe6939107cc90a,PodSandboxId:b8a1c890443c703b96b1d7c10a843e0ee1a59911f8e167a33563e3be9aefd6d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765013622375691626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"
TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcae0ad6f528493f7d768a91eb0b5640ab811469bd4881b8d266de4c65a1530e,PodSandboxId:dc5c8b2bf48b1fbeeccf7a7ca3baa9698f9d337ff0403eeaed1b38a42535142b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765013621856200358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906109c794a06106da1a38cc94530358d3a6b3fa0ebdb8adbb8293f6959a69ab,PodSandboxId:67a78dca14e2ac39f52d6af8b15998bbaacc5b29450a8435bd1544f4e6a29083,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765013610187902668,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1762c3811ed0921bcedf9f48e0729e79858afdbc1a0c39a6db287901e95b83f5,PodSandboxId:02407ed82ca36063f82492c57241580e7555a79fef4b6b8d79716c4e05408d39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765013610096835460,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=236d9891-e6ba-473c-b9c3-e31f526133b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.145537219Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0aeadbd6-3dbd-49fb-a3c9-234cac52e9f3 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.145625049Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0aeadbd6-3dbd-49fb-a3c9-234cac52e9f3 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.147126304Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afc34346-8995-422d-b038-22ae12b0b3c8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.147611979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765013725147586183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afc34346-8995-422d-b038-22ae12b0b3c8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.148635768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e99a6a8d-97e5-4054-9225-a174c2ad291b name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.148736674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e99a6a8d-97e5-4054-9225-a174c2ad291b name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.149778744Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a70e008e737097382ad5048e14a95d0660903ecec9ee382a96cb326719634b46,PodSandboxId:5834180a255d891045101e0eb52187a9341281c9113b9ebccad68434484836fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765013714396067216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16b2dde50f602c8157807f55eacc428d762585966948b24ce349777091192b33,PodSandboxId:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f171675ac6f14adf0f840f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765013711389491853,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac38eff16829fd0d0c91b1838fbe997f15b3aa367eb91ba85cca70d034aecd8d,PodSandboxId:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5
f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765013711400733135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95be82c81de0e4,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535ceb9ede5a055482ac920e65bf34bc906e333493f3f6928c9f65b1d1661475,PodSandboxId:645ad8db890e1ea94c609d08bd7247aab63d46c1051a68720d536baa65da00a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e
86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765013688520007585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16ff8a41ee03d97e00f64bac7e8d01b02620206dc18cbb1f992a1324a3d2808e,PodSandboxId:a85f57a2fabeb6456fd4bc6400ab08393365a6a7ecaecea92c9ada1aba24d403,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765013688425044767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb8eccbfdd039b2a3ca7a044ee9dd43890db617eee8796dcbb720d46d9ae8f0,PodSandboxId:750d6485343688a501e19d468be5d2f88ae6845eac5fcb3798b33b53bbbfdbaf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:883
20b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765013688319425897,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9eefcf9c509f993cb4916b932a1febe3c6c001d4f4bac7e9d02e859875d6d5c,PodSandboxId:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f1716
75ac6f14adf0f840f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765013688295983402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:17070db2f8f2dd0a499909c06c4ac4c330a2ecc3d4fa881f7f4467d0a1f56638,PodSandboxId:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765013688244556166,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95be82c81de0e4,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb81cd535b67d209a36ac0ef24eec735d25bcb5e170e0383cbe6939107cc90a,PodSandboxId:b8a1c890443c703b96b1d7c10a843e0ee1a59911f8e167a33563e3be9aefd6d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765013622375691626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"
TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcae0ad6f528493f7d768a91eb0b5640ab811469bd4881b8d266de4c65a1530e,PodSandboxId:dc5c8b2bf48b1fbeeccf7a7ca3baa9698f9d337ff0403eeaed1b38a42535142b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765013621856200358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906109c794a06106da1a38cc94530358d3a6b3fa0ebdb8adbb8293f6959a69ab,PodSandboxId:67a78dca14e2ac39f52d6af8b15998bbaacc5b29450a8435bd1544f4e6a29083,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765013610187902668,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1762c3811ed0921bcedf9f48e0729e79858afdbc1a0c39a6db287901e95b83f5,PodSandboxId:02407ed82ca36063f82492c57241580e7555a79fef4b6b8d79716c4e05408d39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765013610096835460,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e99a6a8d-97e5-4054-9225-a174c2ad291b name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.163049201Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f1778e9-3615-41fc-a241-97d7e3c3d61c name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.163251056Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:750d6485343688a501e19d468be5d2f88ae6845eac5fcb3798b33b53bbbfdbaf,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-272844,Uid:121786fb98873f9b0be41b77da0f836c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765013687788745972,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 121786fb98873f9b0be41b77da0f836c,kubernetes.io/config.seen: 2025-12-06T09:33:36.105574397Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:645ad8db890e1ea94c609d08bd7247aab63d46c1051a68720d536baa65da00a9,Metadata:&PodSandboxMetadata{Name:kube-proxy-6p7rh,Uid:3359dd42-
379a-4c92-8146-465d519ab5a1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765013687787654018,Labels:map[string]string{controller-revision-hash: 66d5f8d6f6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T09:33:41.362630160Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-272844,Uid:00fe05956219890e3f95be82c81de0e4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765013687779095471,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95
be82c81de0e4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.157:8443,kubernetes.io/config.hash: 00fe05956219890e3f95be82c81de0e4,kubernetes.io/config.seen: 2025-12-06T09:33:36.105563705Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5834180a255d891045101e0eb52187a9341281c9113b9ebccad68434484836fc,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-28k4p,Uid:59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765013687778641469,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T09:33:41.617013391Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a85f57a2fabeb6456fd4bc6
400ab08393365a6a7ecaecea92c9ada1aba24d403,Metadata:&PodSandboxMetadata{Name:etcd-pause-272844,Uid:964525ffd08fbde6270e57ecf995d051,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765013687767036420,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.157:2379,kubernetes.io/config.hash: 964525ffd08fbde6270e57ecf995d051,kubernetes.io/config.seen: 2025-12-06T09:33:36.105575250Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f171675ac6f14adf0f840f4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-272844,Uid:7c18b9e1c337242b08043a2d5c75e23b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765013687755877065,Labels:map[string]str
ing{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7c18b9e1c337242b08043a2d5c75e23b,kubernetes.io/config.seen: 2025-12-06T09:33:36.105573467Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3f1778e9-3615-41fc-a241-97d7e3c3d61c name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.165870987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2a5c9c1-2639-46ce-9210-97d7c5b0d196 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.166096341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2a5c9c1-2639-46ce-9210-97d7c5b0d196 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.166596557Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a70e008e737097382ad5048e14a95d0660903ecec9ee382a96cb326719634b46,PodSandboxId:5834180a255d891045101e0eb52187a9341281c9113b9ebccad68434484836fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765013714396067216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16b2dde50f602c8157807f55eacc428d762585966948b24ce349777091192b33,PodSandboxId:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f171675ac6f14adf0f840f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765013711389491853,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac38eff16829fd0d0c91b1838fbe997f15b3aa367eb91ba85cca70d034aecd8d,PodSandboxId:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5
f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765013711400733135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95be82c81de0e4,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535ceb9ede5a055482ac920e65bf34bc906e333493f3f6928c9f65b1d1661475,PodSandboxId:645ad8db890e1ea94c609d08bd7247aab63d46c1051a68720d536baa65da00a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e
86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765013688520007585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16ff8a41ee03d97e00f64bac7e8d01b02620206dc18cbb1f992a1324a3d2808e,PodSandboxId:a85f57a2fabeb6456fd4bc6400ab08393365a6a7ecaecea92c9ada1aba24d403,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765013688425044767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb8eccbfdd039b2a3ca7a044ee9dd43890db617eee8796dcbb720d46d9ae8f0,PodSandboxId:750d6485343688a501e19d468be5d2f88ae6845eac5fcb3798b33b53bbbfdbaf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:883
20b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765013688319425897,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2a5c9c1-2639-46ce-9210-97d7c5b0d196 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.189910989Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15d15f16-5cf5-4133-998d-d429bbf06d2d name=/runtime.v1.RuntimeService/Version
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.190228660Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15d15f16-5cf5-4133-998d-d429bbf06d2d name=/runtime.v1.RuntimeService/Version
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.192016213Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1fba08cf-a67d-4ce5-91eb-1fbc86234b3b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.192829452Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765013725192806088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fba08cf-a67d-4ce5-91eb-1fbc86234b3b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.193911294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=722df5dd-cef0-49ff-8f93-b3a864548175 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.194073974Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=722df5dd-cef0-49ff-8f93-b3a864548175 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:35:25 pause-272844 crio[2547]: time="2025-12-06 09:35:25.194648229Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a70e008e737097382ad5048e14a95d0660903ecec9ee382a96cb326719634b46,PodSandboxId:5834180a255d891045101e0eb52187a9341281c9113b9ebccad68434484836fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765013714396067216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16b2dde50f602c8157807f55eacc428d762585966948b24ce349777091192b33,PodSandboxId:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f171675ac6f14adf0f840f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765013711389491853,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac38eff16829fd0d0c91b1838fbe997f15b3aa367eb91ba85cca70d034aecd8d,PodSandboxId:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5
f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765013711400733135,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95be82c81de0e4,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:535ceb9ede5a055482ac920e65bf34bc906e333493f3f6928c9f65b1d1661475,PodSandboxId:645ad8db890e1ea94c609d08bd7247aab63d46c1051a68720d536baa65da00a9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e
86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765013688520007585,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16ff8a41ee03d97e00f64bac7e8d01b02620206dc18cbb1f992a1324a3d2808e,PodSandboxId:a85f57a2fabeb6456fd4bc6400ab08393365a6a7ecaecea92c9ada1aba24d403,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765013688425044767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb8eccbfdd039b2a3ca7a044ee9dd43890db617eee8796dcbb720d46d9ae8f0,PodSandboxId:750d6485343688a501e19d468be5d2f88ae6845eac5fcb3798b33b53bbbfdbaf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:883
20b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765013688319425897,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9eefcf9c509f993cb4916b932a1febe3c6c001d4f4bac7e9d02e859875d6d5c,PodSandboxId:e5631ab7f2e34758e2270eeacf9c8d926d4fe19b1f1716
75ac6f14adf0f840f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765013688295983402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c18b9e1c337242b08043a2d5c75e23b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:17070db2f8f2dd0a499909c06c4ac4c330a2ecc3d4fa881f7f4467d0a1f56638,PodSandboxId:98d027bab3f8c546bb654b3f856d32f066e128ce4e35fc013a7ef6b0c8c8bf89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765013688244556166,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00fe05956219890e3f95be82c81de0e4,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb81cd535b67d209a36ac0ef24eec735d25bcb5e170e0383cbe6939107cc90a,PodSandboxId:b8a1c890443c703b96b1d7c10a843e0ee1a59911f8e167a33563e3be9aefd6d6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765013622375691626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-28k4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d8afc1-3dcd-4ae7-90a5-6b67e8c2020e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"
TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcae0ad6f528493f7d768a91eb0b5640ab811469bd4881b8d266de4c65a1530e,PodSandboxId:dc5c8b2bf48b1fbeeccf7a7ca3baa9698f9d337ff0403eeaed1b38a42535142b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765013621856200358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-6p7rh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3359dd42-379a-4c92-8146-465d519ab5a1,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:906109c794a06106da1a38cc94530358d3a6b3fa0ebdb8adbb8293f6959a69ab,PodSandboxId:67a78dca14e2ac39f52d6af8b15998bbaacc5b29450a8435bd1544f4e6a29083,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765013610187902668,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-272844,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 964525ffd08fbde6270e57ecf995d051,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1762c3811ed0921bcedf9f48e0729e79858afdbc1a0c39a6db287901e95b83f5,PodSandboxId:02407ed82ca36063f82492c57241580e7555a79fef4b6b8d79716c4e05408d39,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765013610096835460,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-272844,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 121786fb98873f9b0be41b77da0f836c,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=722df5dd-cef0-49ff-8f93-b3a864548175 name=/runtime.v1.RuntimeService/ListContainers
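
The RuntimeService/ListContainers traffic above is CRI-O answering the kubelet's periodic gRPC polling over the CRI socket. As a hedged illustration only (the socket path and the cri-api/grpc module choices are assumptions, not something the test exercises), the same call can be issued directly:

// listcontainers.go - illustrative sketch of the CRI call logged above; not part of the test suite.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O runtime socket (path is an assumption; CRI-O's default is /var/run/crio/crio.sock).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter is what produces the "No filters were applied, returning full container list" debug line.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-24s attempt=%d state=%s\n", c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}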
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a70e008e73709       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   10 seconds ago       Running             coredns                   1                   5834180a255d8       coredns-66bc5c9577-28k4p               kube-system
	ac38eff16829f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   13 seconds ago       Running             kube-apiserver            2                   98d027bab3f8c       kube-apiserver-pause-272844            kube-system
	16b2dde50f602       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   13 seconds ago       Running             kube-controller-manager   2                   e5631ab7f2e34       kube-controller-manager-pause-272844   kube-system
	535ceb9ede5a0       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   36 seconds ago       Running             kube-proxy                1                   645ad8db890e1       kube-proxy-6p7rh                       kube-system
	16ff8a41ee03d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   36 seconds ago       Running             etcd                      1                   a85f57a2fabeb       etcd-pause-272844                      kube-system
	1eb8eccbfdd03       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   36 seconds ago       Running             kube-scheduler            1                   750d648534368       kube-scheduler-pause-272844            kube-system
	b9eefcf9c509f       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   36 seconds ago       Exited              kube-controller-manager   1                   e5631ab7f2e34       kube-controller-manager-pause-272844   kube-system
	17070db2f8f2d       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   37 seconds ago       Exited              kube-apiserver            1                   98d027bab3f8c       kube-apiserver-pause-272844            kube-system
	5cb81cd535b67       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   b8a1c890443c7       coredns-66bc5c9577-28k4p               kube-system
	fcae0ad6f5284       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   About a minute ago   Exited              kube-proxy                0                   dc5c8b2bf48b1       kube-proxy-6p7rh                       kube-system
	906109c794a06       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Exited              etcd                      0                   67a78dca14e2a       etcd-pause-272844                      kube-system
	1762c3811ed09       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Exited              kube-scheduler            0                   02407ed82ca36       kube-scheduler-pause-272844            kube-system
	
	
	==> coredns [5cb81cd535b67d209a36ac0ef24eec735d25bcb5e170e0383cbe6939107cc90a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54623 - 49864 "HINFO IN 6801714884003529768.4335844317188538113. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032677052s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
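
The exited CoreDNS instance never reaches the in-cluster apiserver Service VIP (10.96.0.1:443) while the control plane is being restarted underneath it, so its kubernetes plugin reflectors time out and it shuts down on SIGTERM. A minimal sketch of the reachability check those dial errors correspond to (address taken from the log, everything else assumed):

// dialcheck.go - illustrative sketch only: the same TCP dial that the CoreDNS
// kubernetes plugin reports as "dial tcp 10.96.0.1:443: i/o timeout".
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the in-cluster kubernetes Service VIP seen in the log above.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Println("apiserver VIP unreachable:", err) // what CoreDNS saw while the apiserver was down
		return
	}
	conn.Close()
	fmt.Println("apiserver VIP reachable")
}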
	
	
	==> coredns [a70e008e737097382ad5048e14a95d0660903ecec9ee382a96cb326719634b46] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52882 - 18090 "HINFO IN 2848364569909425307.5377329861672936372. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.141042576s
	
	
	==> describe nodes <==
	Name:               pause-272844
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-272844
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c863e42b877bb840aec81dfcdcbf173a0ac5fb9
	                    minikube.k8s.io/name=pause-272844
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_33_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:33:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-272844
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:35:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:35:13 +0000   Sat, 06 Dec 2025 09:33:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:35:13 +0000   Sat, 06 Dec 2025 09:33:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:35:13 +0000   Sat, 06 Dec 2025 09:33:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:35:13 +0000   Sat, 06 Dec 2025 09:33:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.157
	  Hostname:    pause-272844
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 526bf343573545918f362d103dc54c5d
	  System UUID:                526bf343-5735-4591-8f36-2d103dc54c5d
	  Boot ID:                    3dc0539b-cd30-4287-beed-5b8708964a60
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-28k4p                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     104s
	  kube-system                 etcd-pause-272844                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         109s
	  kube-system                 kube-apiserver-pause-272844             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-pause-272844    200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-6p7rh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-pause-272844             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 10s                  kube-proxy       
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node pause-272844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node pause-272844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x7 over 116s)  kubelet          Node pause-272844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  116s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    109s                 kubelet          Node pause-272844 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  109s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  109s                 kubelet          Node pause-272844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     109s                 kubelet          Node pause-272844 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  NodeReady                108s                 kubelet          Node pause-272844 status is now: NodeReady
	  Normal  RegisteredNode           105s                 node-controller  Node pause-272844 event: Registered Node pause-272844 in Controller
	  Normal  Starting                 34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)    kubelet          Node pause-272844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)    kubelet          Node pause-272844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x7 over 34s)    kubelet          Node pause-272844 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                   node-controller  Node pause-272844 event: Registered Node pause-272844 in Controller
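
The conditions, capacity and allocatable figures above are read straight from the Node object. A minimal client-go sketch that fetches the same fields (the node name comes from this run; the kubeconfig path and error handling are assumptions):

// nodeconditions.go - illustrative sketch, not part of the test suite.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Any kubeconfig pointing at the cluster works; the default home location is assumed here.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-272844", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	fmt.Println("allocatable cpu:   ", node.Status.Allocatable.Cpu().String())
	fmt.Println("allocatable memory:", node.Status.Allocatable.Memory().String())
}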
	
	
	==> dmesg <==
	[Dec 6 09:33] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000042] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005297] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.182254] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087120] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.105984] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.127791] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.921621] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 6 09:34] kauditd_printk_skb: 190 callbacks suppressed
	[  +7.031216] kauditd_printk_skb: 56 callbacks suppressed
	[Dec 6 09:35] kauditd_printk_skb: 260 callbacks suppressed
	[  +1.789479] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [16ff8a41ee03d97e00f64bac7e8d01b02620206dc18cbb1f992a1324a3d2808e] <==
	{"level":"warn","ts":"2025-12-06T09:35:12.610849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.628947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.647968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.662048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.674040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.684625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.718413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.725492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.735376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.746648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.758558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.769930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.783937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.805473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.816569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.835493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.864415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.866473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.877613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.883380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.894093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.905977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.917069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:12.926731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:35:13.007816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46620","server-name":"","error":"EOF"}
	
	
	==> etcd [906109c794a06106da1a38cc94530358d3a6b3fa0ebdb8adbb8293f6959a69ab] <==
	{"level":"info","ts":"2025-12-06T09:33:45.462943Z","caller":"traceutil/trace.go:172","msg":"trace[188685724] linearizableReadLoop","detail":"{readStateIndex:424; appliedIndex:424; }","duration":"122.219563ms","start":"2025-12-06T09:33:45.340704Z","end":"2025-12-06T09:33:45.462923Z","steps":["trace[188685724] 'read index received'  (duration: 122.214113ms)","trace[188685724] 'applied index is now lower than readState.Index'  (duration: 4.639µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:33:45.591461Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"250.727961ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-272844\" limit:1 ","response":"range_response_count:1 size:5280"}
	{"level":"info","ts":"2025-12-06T09:33:45.591622Z","caller":"traceutil/trace.go:172","msg":"trace[84994929] range","detail":"{range_begin:/registry/minions/pause-272844; range_end:; response_count:1; response_revision:411; }","duration":"250.909559ms","start":"2025-12-06T09:33:45.340701Z","end":"2025-12-06T09:33:45.591610Z","steps":["trace[84994929] 'agreement among raft nodes before linearized reading'  (duration: 122.384808ms)","trace[84994929] 'range keys from in-memory index tree'  (duration: 128.284267ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:33:45.592410Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.568537ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8706191398168190689 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.157\" mod_revision:240 > success:<request_put:<key:\"/registry/masterleases/192.168.50.157\" value_size:67 lease:8706191398168190686 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.157\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T09:33:45.592539Z","caller":"traceutil/trace.go:172","msg":"trace[1829246113] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"253.140578ms","start":"2025-12-06T09:33:45.339355Z","end":"2025-12-06T09:33:45.592495Z","steps":["trace[1829246113] 'process raft request'  (duration: 123.841539ms)","trace[1829246113] 'compare'  (duration: 128.116218ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:34:10.734008Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.833017ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8706191398168190890 > lease_revoke:<id:78d29af302605f31>","response":"size:28"}
	{"level":"info","ts":"2025-12-06T09:34:25.015164Z","caller":"traceutil/trace.go:172","msg":"trace[1608256571] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"247.375274ms","start":"2025-12-06T09:34:24.767773Z","end":"2025-12-06T09:34:25.015149Z","steps":["trace[1608256571] 'process raft request'  (duration: 246.59802ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:34:31.925893Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-06T09:34:31.925957Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-272844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.157:2380"],"advertise-client-urls":["https://192.168.50.157:2379"]}
	{"level":"error","ts":"2025-12-06T09:34:31.926058Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:34:32.008385Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:34:32.009837Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:34:32.009926Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e23094fafc8078d2","current-leader-member-id":"e23094fafc8078d2"}
	{"level":"info","ts":"2025-12-06T09:34:32.010002Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-06T09:34:32.010031Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-06T09:34:32.010365Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:34:32.010459Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:34:32.010473Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T09:34:32.010524Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.157:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:34:32.010536Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.157:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:34:32.010548Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.157:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:34:32.013063Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.157:2380"}
	{"level":"error","ts":"2025-12-06T09:34:32.013102Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.157:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:34:32.013119Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.157:2380"}
	{"level":"info","ts":"2025-12-06T09:34:32.013124Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-272844","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.157:2380"],"advertise-client-urls":["https://192.168.50.157:2379"]}
	
	
	==> kernel <==
	 09:35:25 up 2 min,  0 users,  load average: 0.81, 0.53, 0.21
	Linux pause-272844 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [17070db2f8f2dd0a499909c06c4ac4c330a2ecc3d4fa881f7f4467d0a1f56638] <==
	I1206 09:34:50.010556       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	W1206 09:34:50.010652       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:50.010713       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1206 09:34:50.027524       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1206 09:34:50.065044       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1206 09:34:50.067314       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1206 09:34:50.067703       1 instance.go:239] Using reconciler: lease
	W1206 09:34:50.070943       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1206 09:34:50.071094       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:51.012681       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:51.012834       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:51.075383       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:52.395809       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:52.514068       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:52.818444       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:54.700599       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:54.934513       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:55.022779       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:58.348414       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:58.535649       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:34:59.094774       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:35:05.030188       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:35:05.233397       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 09:35:06.141701       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1206 09:35:10.068860       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
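
The first kube-apiserver attempt exits with "Error creating leases: error creating storage factory: context deadline exceeded" because etcd on 127.0.0.1:2379 never becomes reachable within its startup deadline. A hedged sketch of the same reachability check using the etcd v3 client (the endpoint comes from the log; the certificate paths are assumptions about the minikube layout):

// etcdcheck.go - illustrative sketch only; cert paths are assumed, not taken from the test.
package main

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Assumed minikube locations for the apiserver's etcd client identity.
	cert, err := tls.LoadX509KeyPair(
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-etcd-client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/etcd/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://127.0.0.1:2379"},
		DialTimeout: 2 * time.Second,
		TLS:         &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	st, err := cli.Status(ctx, "https://127.0.0.1:2379")
	if err != nil {
		// This is the state the exited apiserver was stuck in: etcd not yet serving.
		fmt.Println("etcd not reachable:", err)
		return
	}
	fmt.Println("etcd reachable, version", st.Version)
}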
	
	
	==> kube-apiserver [ac38eff16829fd0d0c91b1838fbe997f15b3aa367eb91ba85cca70d034aecd8d] <==
	I1206 09:35:13.746238       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:35:13.746144       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 09:35:13.746159       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:35:13.750475       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:35:13.750589       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 09:35:13.750825       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:35:13.754792       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:35:13.756474       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1206 09:35:13.761156       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:35:13.764333       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1206 09:35:13.764432       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 09:35:13.764354       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1206 09:35:13.766846       1 aggregator.go:171] initial CRD sync complete...
	I1206 09:35:13.766853       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 09:35:13.766858       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:35:13.766862       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:35:13.800513       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:35:14.199971       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:35:14.559821       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:35:15.360355       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:35:15.408853       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:35:15.460474       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:35:15.470487       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:35:17.353449       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:35:17.400849       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [16b2dde50f602c8157807f55eacc428d762585966948b24ce349777091192b33] <==
	I1206 09:35:17.099375       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1206 09:35:17.100601       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1206 09:35:17.101768       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 09:35:17.103027       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 09:35:17.105337       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:35:17.106158       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:35:17.107456       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:35:17.108641       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 09:35:17.111551       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:35:17.111696       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:35:17.111749       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:35:17.111594       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:35:17.115159       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1206 09:35:17.117928       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 09:35:17.119141       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:35:17.126484       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 09:35:17.132503       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:35:17.136685       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:35:17.136997       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 09:35:17.137373       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:35:17.142218       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1206 09:35:17.145083       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 09:35:17.149236       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:35:17.149747       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1206 09:35:17.154707       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [b9eefcf9c509f993cb4916b932a1febe3c6c001d4f4bac7e9d02e859875d6d5c] <==
	I1206 09:34:49.691916       1 serving.go:386] Generated self-signed cert in-memory
	I1206 09:34:50.967125       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1206 09:34:50.967168       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:34:50.970586       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1206 09:34:50.970812       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1206 09:34:50.971097       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1206 09:34:50.971423       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1206 09:35:11.077715       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.50.157:8443/healthz\": dial tcp 192.168.50.157:8443: connect: connection refused"
	
	
	==> kube-proxy [535ceb9ede5a055482ac920e65bf34bc906e333493f3f6928c9f65b1d1661475] <==
	I1206 09:35:14.536533       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:35:14.636817       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:35:14.636867       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.157"]
	E1206 09:35:14.636944       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:35:14.711833       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:35:14.711905       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:35:14.711926       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:35:14.726743       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:35:14.727043       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:35:14.727073       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:35:14.735094       1 config.go:200] "Starting service config controller"
	I1206 09:35:14.735126       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:35:14.735145       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:35:14.735148       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:35:14.735158       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:35:14.735161       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:35:14.735378       1 config.go:309] "Starting node config controller"
	I1206 09:35:14.735490       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:35:14.835622       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:35:14.835664       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:35:14.835692       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:35:14.836215       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [fcae0ad6f528493f7d768a91eb0b5640ab811469bd4881b8d266de4c65a1530e] <==
	I1206 09:33:42.156911       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:33:42.267253       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:33:42.267965       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.157"]
	E1206 09:33:42.269483       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:33:42.370424       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:33:42.372452       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:33:42.372500       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:33:42.387022       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:33:42.387369       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:33:42.387399       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:33:42.392992       1 config.go:200] "Starting service config controller"
	I1206 09:33:42.393038       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:33:42.393065       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:33:42.393080       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:33:42.393107       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:33:42.393121       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:33:42.393756       1 config.go:309] "Starting node config controller"
	I1206 09:33:42.393854       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:33:42.393945       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:33:42.494190       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:33:42.494242       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:33:42.494342       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1762c3811ed0921bcedf9f48e0729e79858afdbc1a0c39a6db287901e95b83f5] <==
	E1206 09:33:33.438191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:33:34.310259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:33:34.323422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:33:34.346800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:33:34.376819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:33:34.408359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:33:34.501234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:33:34.530785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:33:34.530883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:33:34.654952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:33:34.660627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:33:34.692947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:33:34.767489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:33:34.818665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:33:34.858753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:33:34.859461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:33:34.881493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:33:34.940476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:33:37.819782       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:34:31.935466       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1206 09:34:31.935519       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1206 09:34:31.935539       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1206 09:34:31.935577       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:34:31.935816       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1206 09:34:31.935859       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [1eb8eccbfdd039b2a3ca7a044ee9dd43890db617eee8796dcbb720d46d9ae8f0] <==
	E1206 09:35:11.092894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.50.157:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.50.157:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:35:11.092952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.50.157:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.50.157:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:35:11.093057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.50.157:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.50.157:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:35:11.093087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.50.157:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.50.157:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:35:11.093253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.50.157:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.50.157:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:35:13.671378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:35:13.673537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:35:13.673665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:35:13.673710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:35:13.673754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:35:13.673792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:35:13.673828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:35:13.673874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:35:13.673911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:35:13.673945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:35:13.673980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:35:13.674008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:35:13.685771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:35:13.686148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:35:13.686467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:35:13.686731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:35:13.686931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:35:13.687151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:35:13.703501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 09:35:15.788914       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:35:11 pause-272844 kubelet[3392]: E1206 09:35:11.585124    3392 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.50.157:8443/api/v1/nodes/pause-272844\": dial tcp 192.168.50.157:8443: connect: connection refused"
	Dec 06 09:35:12 pause-272844 kubelet[3392]: E1206 09:35:12.392868    3392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-272844\" not found" node="pause-272844"
	Dec 06 09:35:12 pause-272844 kubelet[3392]: E1206 09:35:12.400927    3392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-272844\" not found" node="pause-272844"
	Dec 06 09:35:12 pause-272844 kubelet[3392]: E1206 09:35:12.402147    3392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-272844\" not found" node="pause-272844"
	Dec 06 09:35:12 pause-272844 kubelet[3392]: I1206 09:35:12.884381    3392 kubelet_node_status.go:75] "Attempting to register node" node="pause-272844"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: E1206 09:35:13.405620    3392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-272844\" not found" node="pause-272844"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: E1206 09:35:13.405991    3392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-272844\" not found" node="pause-272844"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: E1206 09:35:13.406200    3392 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-272844\" not found" node="pause-272844"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: I1206 09:35:13.778611    3392 kubelet_node_status.go:124] "Node was previously registered" node="pause-272844"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: I1206 09:35:13.778701    3392 kubelet_node_status.go:78] "Successfully registered node" node="pause-272844"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: E1206 09:35:13.778722    3392 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"pause-272844\": node \"pause-272844\" not found"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: I1206 09:35:13.781640    3392 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: I1206 09:35:13.783130    3392 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: E1206 09:35:13.796954    3392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"pause-272844\" not found"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: E1206 09:35:13.897164    3392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"pause-272844\" not found"
	Dec 06 09:35:13 pause-272844 kubelet[3392]: E1206 09:35:13.998338    3392 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"pause-272844\" not found"
	Dec 06 09:35:14 pause-272844 kubelet[3392]: I1206 09:35:14.057972    3392 apiserver.go:52] "Watching apiserver"
	Dec 06 09:35:14 pause-272844 kubelet[3392]: I1206 09:35:14.105434    3392 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 06 09:35:14 pause-272844 kubelet[3392]: I1206 09:35:14.197373    3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3359dd42-379a-4c92-8146-465d519ab5a1-lib-modules\") pod \"kube-proxy-6p7rh\" (UID: \"3359dd42-379a-4c92-8146-465d519ab5a1\") " pod="kube-system/kube-proxy-6p7rh"
	Dec 06 09:35:14 pause-272844 kubelet[3392]: I1206 09:35:14.197446    3392 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3359dd42-379a-4c92-8146-465d519ab5a1-xtables-lock\") pod \"kube-proxy-6p7rh\" (UID: \"3359dd42-379a-4c92-8146-465d519ab5a1\") " pod="kube-system/kube-proxy-6p7rh"
	Dec 06 09:35:14 pause-272844 kubelet[3392]: I1206 09:35:14.363535    3392 scope.go:117] "RemoveContainer" containerID="fcae0ad6f528493f7d768a91eb0b5640ab811469bd4881b8d266de4c65a1530e"
	Dec 06 09:35:14 pause-272844 kubelet[3392]: I1206 09:35:14.433195    3392 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-272844"
	Dec 06 09:35:14 pause-272844 kubelet[3392]: E1206 09:35:14.464443    3392 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-272844\" already exists" pod="kube-system/kube-apiserver-pause-272844"
	Dec 06 09:35:21 pause-272844 kubelet[3392]: E1206 09:35:21.294668    3392 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765013721292177214 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 06 09:35:21 pause-272844 kubelet[3392]: E1206 09:35:21.294758    3392 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765013721292177214 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-272844 -n pause-272844
helpers_test.go:269: (dbg) Run:  kubectl --context pause-272844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (67.43s)
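
Note on the failure above: the kube-controller-manager and kube-scheduler blocks show the restarted control plane refusing connections on 192.168.50.157:8443 around 09:35:11 (the controller manager even fails to build its controller context waiting for /healthz) before informer caches re-sync at 09:35:15-17. The Go sketch below polls an apiserver /healthz endpoint in the same spirit, which can help when reproducing that window locally; the URL, the 30-second deadline and the TLS verification skip are illustrative assumptions, not minikube or Kubernetes source code.

	// healthzwait.go: poll an apiserver /healthz endpoint until it answers 200 OK
	// or a deadline expires. A minimal sketch only; the URL, the 30s deadline and
	// the TLS verification skip are illustrative assumptions, not minikube or
	// Kubernetes source code.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver serves a self-signed certificate during bring-up; this
			// sketch only checks reachability, so certificate verification is skipped.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body)
					return nil
				}
			}
			// "connection refused" lands here while the apiserver restarts.
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.157:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}

Running the sketch while the control plane is being restarted shows how long /healthz keeps refusing connections before the components can resume syncing.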

                                                
                                    

Test pass (375/431)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 26.46
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 11.2
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.16
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 13.56
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.17
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.66
31 TestOffline 59.09
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 139.11
40 TestAddons/serial/GCPAuth/Namespaces 0.15
41 TestAddons/serial/GCPAuth/FakeCredentials 12.54
44 TestAddons/parallel/Registry 19.42
45 TestAddons/parallel/RegistryCreds 0.79
47 TestAddons/parallel/InspektorGadget 12.24
48 TestAddons/parallel/MetricsServer 7.58
50 TestAddons/parallel/CSI 56.29
51 TestAddons/parallel/Headlamp 21.17
52 TestAddons/parallel/CloudSpanner 6.64
53 TestAddons/parallel/LocalPath 13.12
54 TestAddons/parallel/NvidiaDevicePlugin 7.11
55 TestAddons/parallel/Yakd 11.01
57 TestAddons/StoppedEnableDisable 87.89
58 TestCertOptions 59.01
59 TestCertExpiration 312.94
61 TestForceSystemdFlag 82.58
62 TestForceSystemdEnv 61.97
67 TestErrorSpam/setup 42.02
68 TestErrorSpam/start 0.32
69 TestErrorSpam/status 0.7
70 TestErrorSpam/pause 1.52
71 TestErrorSpam/unpause 1.78
72 TestErrorSpam/stop 5.57
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 80.24
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 39.1
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.23
84 TestFunctional/serial/CacheCmd/cache/add_local 2.25
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.5
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 50.68
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.31
95 TestFunctional/serial/LogsFileCmd 1.34
96 TestFunctional/serial/InvalidService 4.94
98 TestFunctional/parallel/ConfigCmd 0.41
99 TestFunctional/parallel/DashboardCmd 14.07
100 TestFunctional/parallel/DryRun 0.23
101 TestFunctional/parallel/InternationalLanguage 0.12
102 TestFunctional/parallel/StatusCmd 0.76
106 TestFunctional/parallel/ServiceCmdConnect 20.43
107 TestFunctional/parallel/AddonsCmd 0.14
110 TestFunctional/parallel/SSHCmd 0.34
111 TestFunctional/parallel/CpCmd 1.12
112 TestFunctional/parallel/MySQL 23.57
113 TestFunctional/parallel/FileSync 0.17
114 TestFunctional/parallel/CertSync 1.1
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.37
122 TestFunctional/parallel/License 0.34
123 TestFunctional/parallel/Version/short 0.07
124 TestFunctional/parallel/Version/components 0.51
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.18
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
129 TestFunctional/parallel/ImageCommands/ImageBuild 4.79
130 TestFunctional/parallel/ImageCommands/Setup 1.97
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
144 TestFunctional/parallel/ProfileCmd/profile_list 0.31
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.47
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.05
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.93
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.54
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.07
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.8
153 TestFunctional/parallel/ServiceCmd/DeployApp 8.21
154 TestFunctional/parallel/MountCmd/any-port 9.12
155 TestFunctional/parallel/ServiceCmd/List 0.44
156 TestFunctional/parallel/ServiceCmd/JSONOutput 0.42
157 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
158 TestFunctional/parallel/ServiceCmd/Format 0.42
159 TestFunctional/parallel/ServiceCmd/URL 0.38
160 TestFunctional/parallel/MountCmd/specific-port 1.29
161 TestFunctional/parallel/MountCmd/VerifyCleanup 1.29
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 72.56
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 52.96
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.07
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.24
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.19
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.19
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.55
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 34.78
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.06
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.29
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.27
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.82
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.42
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 20.41
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.23
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.1
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.74
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 10.5
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.16
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 50.48
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.34
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.16
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 36.72
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.18
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.17
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.06
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.38
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.4
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.61
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.27
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.19
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.19
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.21
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 4.32
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.92
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.5
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 10.17
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.83
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.76
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.52
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.48
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.67
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.53
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.33
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.33
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.31
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 14.84
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.33
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.27
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.33
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.33
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.53
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.07
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.07
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.07
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.56
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.36
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.03
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.01
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.01
261 TestMultiControlPlane/serial/StartCluster 202.35
262 TestMultiControlPlane/serial/DeployApp 7.88
263 TestMultiControlPlane/serial/PingHostFromPods 1.29
264 TestMultiControlPlane/serial/AddWorkerNode 44.64
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.68
267 TestMultiControlPlane/serial/CopyFile 10.54
268 TestMultiControlPlane/serial/StopSecondaryNode 86
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.5
270 TestMultiControlPlane/serial/RestartSecondaryNode 37.5
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 374
273 TestMultiControlPlane/serial/DeleteSecondaryNode 18.76
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
275 TestMultiControlPlane/serial/StopCluster 258.28
276 TestMultiControlPlane/serial/RestartCluster 95.28
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.5
278 TestMultiControlPlane/serial/AddSecondaryNode 84.98
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.72
284 TestJSONOutput/start/Command 87.64
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.72
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.62
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 6.95
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.23
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 81.17
316 TestMountStart/serial/StartWithMountFirst 20
317 TestMountStart/serial/VerifyMountFirst 0.29
318 TestMountStart/serial/StartWithMountSecond 22.33
319 TestMountStart/serial/VerifyMountSecond 0.31
320 TestMountStart/serial/DeleteFirst 0.71
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.25
323 TestMountStart/serial/RestartStopped 21.22
324 TestMountStart/serial/VerifyMountPostStop 0.29
327 TestMultiNode/serial/FreshStart2Nodes 105.28
328 TestMultiNode/serial/DeployApp2Nodes 6.5
329 TestMultiNode/serial/PingHostFrom2Pods 0.84
330 TestMultiNode/serial/AddNode 43.78
331 TestMultiNode/serial/MultiNodeLabels 0.07
332 TestMultiNode/serial/ProfileList 0.47
333 TestMultiNode/serial/CopyFile 6.04
334 TestMultiNode/serial/StopNode 2.42
335 TestMultiNode/serial/StartAfterStop 40.79
336 TestMultiNode/serial/RestartKeepsNodes 301.64
337 TestMultiNode/serial/DeleteNode 2.54
338 TestMultiNode/serial/StopMultiNode 174.33
339 TestMultiNode/serial/RestartMultiNode 89.99
340 TestMultiNode/serial/ValidateNameConflict 39.5
347 TestScheduledStopUnix 109.51
351 TestRunningBinaryUpgrade 148.2
353 TestKubernetesUpgrade 195.96
359 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
365 TestPause/serial/Start 102.69
366 TestNoKubernetes/serial/StartWithK8s 82.4
367 TestNoKubernetes/serial/StartWithStopK8s 29.41
369 TestNoKubernetes/serial/Start 31.48
377 TestNetworkPlugins/group/false 3.37
381 TestISOImage/Setup 30.77
382 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
383 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
384 TestNoKubernetes/serial/ProfileList 15.38
385 TestNoKubernetes/serial/Stop 1.37
386 TestNoKubernetes/serial/StartNoArgs 43.34
388 TestISOImage/Binaries/crictl 0.18
389 TestISOImage/Binaries/curl 0.18
390 TestISOImage/Binaries/docker 0.18
391 TestISOImage/Binaries/git 0.17
392 TestISOImage/Binaries/iptables 0.16
393 TestISOImage/Binaries/podman 0.17
394 TestISOImage/Binaries/rsync 0.16
395 TestISOImage/Binaries/socat 0.17
396 TestISOImage/Binaries/wget 0.17
397 TestISOImage/Binaries/VBoxControl 0.17
398 TestISOImage/Binaries/VBoxService 0.17
399 TestStoppedBinaryUpgrade/Setup 3.16
400 TestStoppedBinaryUpgrade/Upgrade 149.57
401 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
402 TestStoppedBinaryUpgrade/MinikubeLogs 1.05
404 TestStartStop/group/old-k8s-version/serial/FirstStart 104.6
406 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 92.4
408 TestStartStop/group/embed-certs/serial/FirstStart 109.56
409 TestStartStop/group/old-k8s-version/serial/DeployApp 11.32
410 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.07
411 TestStartStop/group/old-k8s-version/serial/Stop 86.38
412 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.27
413 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
414 TestStartStop/group/default-k8s-diff-port/serial/Stop 87.97
415 TestStartStop/group/embed-certs/serial/DeployApp 10.33
416 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
417 TestStartStop/group/embed-certs/serial/Stop 90.07
419 TestStartStop/group/no-preload/serial/FirstStart 94.89
420 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
421 TestStartStop/group/old-k8s-version/serial/SecondStart 46.63
422 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
423 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.97
424 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
425 TestStartStop/group/embed-certs/serial/SecondStart 44.83
426 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.01
427 TestStartStop/group/no-preload/serial/DeployApp 12.53
428 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
429 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 18.01
430 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
431 TestStartStop/group/old-k8s-version/serial/Pause 2.94
433 TestStartStop/group/newest-cni/serial/FirstStart 43.64
434 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.26
435 TestStartStop/group/no-preload/serial/Stop 86.55
436 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
437 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
438 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
439 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.82
440 TestNetworkPlugins/group/auto/Start 81.69
441 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
442 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
443 TestStartStop/group/embed-certs/serial/Pause 3.1
444 TestNetworkPlugins/group/kindnet/Start 67.35
445 TestStartStop/group/newest-cni/serial/DeployApp 0
446 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.95
447 TestStartStop/group/newest-cni/serial/Stop 7.23
448 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
449 TestStartStop/group/newest-cni/serial/SecondStart 44.79
450 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
451 TestStartStop/group/no-preload/serial/SecondStart 61.03
452 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
453 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
454 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
455 TestStartStop/group/newest-cni/serial/Pause 3.99
456 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
457 TestNetworkPlugins/group/auto/KubeletFlags 0.2
458 TestNetworkPlugins/group/auto/NetCatPod 10.29
459 TestNetworkPlugins/group/calico/Start 92.72
460 TestNetworkPlugins/group/kindnet/KubeletFlags 0.18
461 TestNetworkPlugins/group/kindnet/NetCatPod 12.22
462 TestNetworkPlugins/group/auto/DNS 0.21
463 TestNetworkPlugins/group/auto/Localhost 0.16
464 TestNetworkPlugins/group/auto/HairPin 0.15
465 TestNetworkPlugins/group/kindnet/DNS 0.2
466 TestNetworkPlugins/group/kindnet/Localhost 0.17
467 TestNetworkPlugins/group/kindnet/HairPin 0.17
468 TestNetworkPlugins/group/custom-flannel/Start 81.93
469 TestNetworkPlugins/group/enable-default-cni/Start 79.31
470 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.01
471 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
472 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
473 TestStartStop/group/no-preload/serial/Pause 3.47
474 TestNetworkPlugins/group/flannel/Start 76.26
475 TestNetworkPlugins/group/calico/ControllerPod 6.01
476 TestNetworkPlugins/group/calico/KubeletFlags 0.2
477 TestNetworkPlugins/group/calico/NetCatPod 13.48
478 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.18
479 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.21
480 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
481 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.31
482 TestNetworkPlugins/group/calico/DNS 0.2
483 TestNetworkPlugins/group/calico/Localhost 0.15
484 TestNetworkPlugins/group/calico/HairPin 0.15
485 TestNetworkPlugins/group/custom-flannel/DNS 0.2
486 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
487 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
488 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
489 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
490 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
491 TestNetworkPlugins/group/bridge/Start 83.77
493 TestISOImage/PersistentMounts//data 0.27
494 TestISOImage/PersistentMounts//var/lib/docker 0.16
495 TestISOImage/PersistentMounts//var/lib/cni 0.17
496 TestISOImage/PersistentMounts//var/lib/kubelet 0.17
497 TestISOImage/PersistentMounts//var/lib/minikube 0.16
498 TestISOImage/PersistentMounts//var/lib/toolbox 0.18
499 TestISOImage/PersistentMounts//var/lib/boot2docker 0.16
500 TestISOImage/VersionJSON 0.16
501 TestISOImage/eBPFSupport 0.16
502 TestNetworkPlugins/group/flannel/ControllerPod 6.01
503 TestNetworkPlugins/group/flannel/KubeletFlags 0.18
504 TestNetworkPlugins/group/flannel/NetCatPod 11.23
505 TestNetworkPlugins/group/flannel/DNS 0.16
506 TestNetworkPlugins/group/flannel/Localhost 0.13
507 TestNetworkPlugins/group/flannel/HairPin 0.13
508 TestNetworkPlugins/group/bridge/KubeletFlags 0.16
509 TestNetworkPlugins/group/bridge/NetCatPod 10.23
510 TestNetworkPlugins/group/bridge/DNS 0.15
511 TestNetworkPlugins/group/bridge/Localhost 0.11
512 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.28.0/json-events (26.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-178504 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-178504 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (26.460503797s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (26.46s)
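
The json-events variant exercises `start -o=json`, which writes one JSON event per line to stdout (a CloudEvents-style stream). Below is a minimal Go sketch for inspecting such a stream; the "type" and "data.message" field names are assumptions based on observed output rather than a documented schema, so treat it as illustration only.

	// jsonevents.go: decode the line-delimited JSON events written by
	// `minikube start -o=json`, as exercised by the json-events test above.
	// The "type" and "data.message" fields are assumptions based on observed
	// CloudEvents-style output, not a documented schema.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some events carry long messages
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event line
			}
			msg := ""
			if data, ok := ev["data"].(map[string]interface{}); ok {
				msg, _ = data["message"].(string)
			}
			fmt.Printf("%v\t%s\n", ev["type"], msg)
		}
	}

Example use (hypothetical profile name, same driver and runtime flags as the test command above): out/minikube-linux-amd64 start -o=json --download-only -p demo --driver=kvm2 --container-runtime=crio | go run jsonevents.go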

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1206 08:28:38.569033    9552 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1206 08:28:38.569123    9552 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
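
The preload-exists check only asserts that the version-specific preload tarball is already present in the local cache (the exact path is logged above). A standalone Go sketch with the same effect follows; it hard-codes the v18 preload schema and the cri-o/amd64 naming taken from that logged filename, and it treats MINIKUBE_HOME as the .minikube directory itself, matching the environment shown in the Last Start log below. All of these are assumptions for illustration, not minikube's own preload code.

	// preloadcheck.go: a standalone equivalent of the preload-exists assertion
	// above: verify that the CRI-O preload tarball for a Kubernetes version is
	// already in the local minikube cache. The directory layout and file name
	// come from the path logged above; treating MINIKUBE_HOME as the .minikube
	// directory itself mirrors this CI environment and is an assumption.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func preloadPath(minikubeHome, k8sVersion string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}

	func main() {
		home := os.Getenv("MINIKUBE_HOME")
		if home == "" {
			home = filepath.Join(os.Getenv("HOME"), ".minikube")
		}
		p := preloadPath(home, "v1.28.0")
		if st, err := os.Stat(p); err != nil {
			fmt.Println("preload missing:", err)
		} else {
			fmt.Printf("found %s (%d bytes)\n", p, st.Size())
		}
	}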

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-178504
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-178504: exit status 85 (65.876994ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-178504 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-178504 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:28:12
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:28:12.160197    9565 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:28:12.160419    9565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:12.160427    9565 out.go:374] Setting ErrFile to fd 2...
	I1206 08:28:12.160432    9565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:12.160637    9565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	W1206 08:28:12.160746    9565 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22049-5603/.minikube/config/config.json: open /home/jenkins/minikube-integration/22049-5603/.minikube/config/config.json: no such file or directory
	I1206 08:28:12.161191    9565 out.go:368] Setting JSON to true
	I1206 08:28:12.162042    9565 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":634,"bootTime":1765009058,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:28:12.162095    9565 start.go:143] virtualization: kvm guest
	I1206 08:28:12.166345    9565 out.go:99] [download-only-178504] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1206 08:28:12.166696    9565 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball: no such file or directory
	I1206 08:28:12.166787    9565 notify.go:221] Checking for updates...
	I1206 08:28:12.168402    9565 out.go:171] MINIKUBE_LOCATION=22049
	I1206 08:28:12.169491    9565 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:28:12.170553    9565 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	I1206 08:28:12.171614    9565 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	I1206 08:28:12.172620    9565 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 08:28:12.174547    9565 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 08:28:12.174777    9565 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:28:12.635643    9565 out.go:99] Using the kvm2 driver based on user configuration
	I1206 08:28:12.635682    9565 start.go:309] selected driver: kvm2
	I1206 08:28:12.635690    9565 start.go:927] validating driver "kvm2" against <nil>
	I1206 08:28:12.636021    9565 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 08:28:12.636562    9565 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1206 08:28:12.636753    9565 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 08:28:12.636782    9565 cni.go:84] Creating CNI manager for ""
	I1206 08:28:12.636840    9565 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 08:28:12.636853    9565 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 08:28:12.636901    9565 start.go:353] cluster config:
	{Name:download-only-178504 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-178504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:28:12.637134    9565 iso.go:125] acquiring lock: {Name:mk30cf35cfaf5c28a2b5f78c7b431de5eb8c8e82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 08:28:12.638550    9565 out.go:99] Downloading VM boot image ...
	I1206 08:28:12.638588    9565 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22049-5603/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso
	I1206 08:28:24.788909    9565 out.go:99] Starting "download-only-178504" primary control-plane node in "download-only-178504" cluster
	I1206 08:28:24.788938    9565 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 08:28:24.895077    9565 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1206 08:28:24.895109    9565 cache.go:65] Caching tarball of preloaded images
	I1206 08:28:24.895298    9565 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 08:28:24.897124    9565 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1206 08:28:24.897140    9565 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1206 08:28:25.006922    9565 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1206 08:28:25.007088    9565 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-178504 host does not exist
	  To start a cluster, run: "minikube start -p download-only-178504"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
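The LogsDuration output above records the MD5 the GCS API returned for the v1.28.0 preload. A minimal sketch for re-checking the cached tarball by hand, assuming the default MINIKUBE_HOME of ~/.minikube (the CI run uses a job-specific path):

    # Verify the cached v1.28.0 preload against the MD5 reported above.
    PRELOAD="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
    echo "72bc7f8573f574c02d8c9a9b3496176b  $PRELOAD" | md5sum -c -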

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-178504
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (11.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-445198 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-445198 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.202087064s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (11.20s)
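A sketch of re-running the same download-only start outside the test harness, with the flags copied from the Run line above (the profile name is arbitrary; the test also repeats --container-runtime, which is harmless):

    # Download the v1.34.2 images/preload without creating a VM.
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-445198 --force \
      --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2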

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1206 08:28:50.130845    9552 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1206 08:28:50.130882    9552 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-445198
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-445198: exit status 85 (72.026333ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-178504 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-178504 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-178504                                                                                                                                                 │ download-only-178504 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-445198 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-445198 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:28:38
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:28:38.980841    9843 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:28:38.980939    9843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:38.980946    9843 out.go:374] Setting ErrFile to fd 2...
	I1206 08:28:38.980951    9843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:38.981131    9843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 08:28:38.981607    9843 out.go:368] Setting JSON to true
	I1206 08:28:38.982359    9843 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":661,"bootTime":1765009058,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:28:38.982416    9843 start.go:143] virtualization: kvm guest
	I1206 08:28:38.984144    9843 out.go:99] [download-only-445198] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 08:28:38.984303    9843 notify.go:221] Checking for updates...
	I1206 08:28:38.985418    9843 out.go:171] MINIKUBE_LOCATION=22049
	I1206 08:28:38.987498    9843 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:28:38.988672    9843 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	I1206 08:28:38.989707    9843 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	I1206 08:28:38.990700    9843 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 08:28:38.992698    9843 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 08:28:38.992894    9843 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:28:39.023368    9843 out.go:99] Using the kvm2 driver based on user configuration
	I1206 08:28:39.023390    9843 start.go:309] selected driver: kvm2
	I1206 08:28:39.023397    9843 start.go:927] validating driver "kvm2" against <nil>
	I1206 08:28:39.023714    9843 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 08:28:39.024238    9843 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1206 08:28:39.024430    9843 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 08:28:39.024482    9843 cni.go:84] Creating CNI manager for ""
	I1206 08:28:39.024538    9843 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 08:28:39.024549    9843 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 08:28:39.024613    9843 start.go:353] cluster config:
	{Name:download-only-445198 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-445198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:28:39.024715    9843 iso.go:125] acquiring lock: {Name:mk30cf35cfaf5c28a2b5f78c7b431de5eb8c8e82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 08:28:39.025818    9843 out.go:99] Starting "download-only-445198" primary control-plane node in "download-only-445198" cluster
	I1206 08:28:39.025839    9843 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 08:28:39.206645    9843 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 08:28:39.206673    9843 cache.go:65] Caching tarball of preloaded images
	I1206 08:28:39.206842    9843 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 08:28:39.208322    9843 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1206 08:28:39.208336    9843 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1206 08:28:39.319417    9843 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1206 08:28:39.319462    9843 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-445198 host does not exist
	  To start a cluster, run: "minikube start -p download-only-445198"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-445198
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (13.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-807354 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-807354 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.560020263s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (13.56s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1206 08:29:04.074217    9552 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1206 08:29:04.074282    9552 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-807354
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-807354: exit status 85 (77.540084ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-178504 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-178504 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-178504                                                                                                                                                        │ download-only-178504 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-445198 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-445198 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ delete  │ -p download-only-445198                                                                                                                                                        │ download-only-445198 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │ 06 Dec 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-807354 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-807354 │ jenkins │ v1.37.0 │ 06 Dec 25 08:28 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 08:28:50
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 08:28:50.567930   10039 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:28:50.568047   10039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:50.568053   10039 out.go:374] Setting ErrFile to fd 2...
	I1206 08:28:50.568056   10039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:28:50.568284   10039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 08:28:50.568800   10039 out.go:368] Setting JSON to true
	I1206 08:28:50.570069   10039 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":672,"bootTime":1765009058,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:28:50.570122   10039 start.go:143] virtualization: kvm guest
	I1206 08:28:50.571932   10039 out.go:99] [download-only-807354] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 08:28:50.572094   10039 notify.go:221] Checking for updates...
	I1206 08:28:50.573391   10039 out.go:171] MINIKUBE_LOCATION=22049
	I1206 08:28:50.575229   10039 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:28:50.576348   10039 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	I1206 08:28:50.577488   10039 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	I1206 08:28:50.578540   10039 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 08:28:50.580561   10039 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 08:28:50.580797   10039 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:28:50.612246   10039 out.go:99] Using the kvm2 driver based on user configuration
	I1206 08:28:50.612269   10039 start.go:309] selected driver: kvm2
	I1206 08:28:50.612274   10039 start.go:927] validating driver "kvm2" against <nil>
	I1206 08:28:50.612623   10039 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 08:28:50.613136   10039 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1206 08:28:50.613277   10039 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 08:28:50.613311   10039 cni.go:84] Creating CNI manager for ""
	I1206 08:28:50.613355   10039 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 08:28:50.613364   10039 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 08:28:50.613416   10039 start.go:353] cluster config:
	{Name:download-only-807354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-807354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:28:50.613528   10039 iso.go:125] acquiring lock: {Name:mk30cf35cfaf5c28a2b5f78c7b431de5eb8c8e82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 08:28:50.614919   10039 out.go:99] Starting "download-only-807354" primary control-plane node in "download-only-807354" cluster
	I1206 08:28:50.614940   10039 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 08:28:50.714851   10039 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1206 08:28:50.714878   10039 cache.go:65] Caching tarball of preloaded images
	I1206 08:28:50.715044   10039 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 08:28:50.716781   10039 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1206 08:28:50.716811   10039 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1206 08:28:50.835879   10039 preload.go:295] Got checksum from GCS API "b4861df7675d96066744278d08e2cd35"
	I1206 08:28:50.835942   10039 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:b4861df7675d96066744278d08e2cd35 -> /home/jenkins/minikube-integration/22049-5603/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-807354 host does not exist
	  To start a cluster, run: "minikube start -p download-only-807354"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-807354
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.66s)

                                                
                                                
=== RUN   TestBinaryMirror
I1206 08:29:04.917737    9552 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-499439 --alsologtostderr --binary-mirror http://127.0.0.1:45531 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-499439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-499439
--- PASS: TestBinaryMirror (0.66s)

                                                
                                    
TestOffline (59.09s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-991776 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-991776 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (58.24910295s)
helpers_test.go:175: Cleaning up "offline-crio-991776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-991776
--- PASS: TestOffline (59.09s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-618522
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-618522: exit status 85 (65.449126ms)

                                                
                                                
-- stdout --
	* Profile "addons-618522" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-618522"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
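The non-zero exit above is the expected result, not a failure. A minimal sketch of asserting the same behaviour from a shell, assuming no addons-618522 profile exists yet:

    # Enabling an addon for a missing profile should exit with status 85.
    out/minikube-linux-amd64 addons enable dashboard -p addons-618522
    [ $? -eq 85 ] && echo "got the expected exit status 85"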

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-618522
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-618522: exit status 85 (64.876013ms)

                                                
                                                
-- stdout --
	* Profile "addons-618522" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-618522"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (139.11s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-618522 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-618522 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m19.106519243s)
--- PASS: TestAddons/Setup (139.11s)
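A trimmed sketch of the same setup start, keeping only a few of the addons exercised by the parallel tests below; the full flag list is in the Run line above:

    # Bring up the addons profile with a subset of the addons enabled in this suite.
    out/minikube-linux-amd64 start -p addons-618522 --wait=true --memory=4096 \
      --driver=kvm2 --container-runtime=crio \
      --addons=registry --addons=metrics-server --addons=csi-hostpath-driver --addons=ingress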

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-618522 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-618522 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (12.54s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-618522 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-618522 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [28642f2b-ea29-4744-a69a-ca5940220bc5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [28642f2b-ea29-4744-a69a-ca5940220bc5] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 12.004816584s
addons_test.go:694: (dbg) Run:  kubectl --context addons-618522 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-618522 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-618522 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (12.54s)
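A minimal sketch of the same credential check the test performs, using the exec commands logged above against the busybox pod:

    # Confirm gcp-auth injected the fake credentials into the pod's environment.
    kubectl --context addons-618522 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-618522 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"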

                                                
                                    
TestAddons/parallel/Registry (19.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 11.32589ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-45g8h" [9bf3de1f-8c67-4f56-8ed4-4820b8abc96d] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.009783722s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-nj49l" [6b459c6d-2dff-4d22-afc5-16895571af55] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005364464s
addons_test.go:392: (dbg) Run:  kubectl --context addons-618522 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-618522 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-618522 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.341533499s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 ip
2025/12/06 08:32:04 [DEBUG] GET http://192.168.39.168:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.42s)
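A sketch of probing the in-cluster registry service the same way the test does above, from a throwaway busybox pod:

    # wget --spider succeeds only if the registry service answers inside the cluster.
    kubectl --context addons-618522 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"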

                                                
                                    
TestAddons/parallel/RegistryCreds (0.79s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 17.918614ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-618522
addons_test.go:332: (dbg) Run:  kubectl --context addons-618522 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.79s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.24s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-b29gf" [b87c75ea-dc95-4f61-885f-1c84e4926027] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004223185s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-618522 addons disable inspektor-gadget --alsologtostderr -v=1: (6.234974856s)
--- PASS: TestAddons/parallel/InspektorGadget (12.24s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.58s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 7.066138ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-9tv6q" [1acee34d-7cc9-4f91-81a5-5af04cf36b68] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.009151405s
addons_test.go:463: (dbg) Run:  kubectl --context addons-618522 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-618522 addons disable metrics-server --alsologtostderr -v=1: (1.485753946s)
--- PASS: TestAddons/parallel/MetricsServer (7.58s)
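A sketch of the same check by hand: wait for the pod carrying the k8s-app=metrics-server label used above, then query pod metrics (kubectl top only returns data once metrics-server is serving):

    # Wait for metrics-server, then read pod metrics as the test does.
    kubectl --context addons-618522 -n kube-system wait --for=condition=ready pod \
      -l k8s-app=metrics-server --timeout=6m
    kubectl --context addons-618522 top pods -n kube-system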

                                                
                                    
TestAddons/parallel/CSI (56.29s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1206 08:32:12.277092    9552 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1206 08:32:12.282791    9552 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1206 08:32:12.282818    9552 kapi.go:107] duration metric: took 5.72888ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.738007ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-618522 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-618522 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [2f36c658-7cc2-4729-ada6-a9a51b0985fd] Pending
helpers_test.go:352: "task-pv-pod" [2f36c658-7cc2-4729-ada6-a9a51b0985fd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [2f36c658-7cc2-4729-ada6-a9a51b0985fd] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004867586s
addons_test.go:572: (dbg) Run:  kubectl --context addons-618522 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-618522 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-618522 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-618522 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-618522 delete pod task-pv-pod: (1.208244877s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-618522 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-618522 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-618522 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [91632ce5-cf31-418c-97f4-ef3720d5cfe4] Pending
helpers_test.go:352: "task-pv-pod-restore" [91632ce5-cf31-418c-97f4-ef3720d5cfe4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [91632ce5-cf31-418c-97f4-ef3720d5cfe4] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005237855s
addons_test.go:614: (dbg) Run:  kubectl --context addons-618522 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-618522 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-618522 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-618522 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.116541635s)
--- PASS: TestAddons/parallel/CSI (56.29s)
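The repeated get pvc calls above are the helper polling .status.phase. A minimal sketch of the same loop for the hpvc claim, waiting until the CSI driver binds it:

    # Poll the PVC phase (as helpers_test.go does) until the claim is Bound.
    until [ "$(kubectl --context addons-618522 get pvc hpvc -o jsonpath='{.status.phase}' -n default)" = "Bound" ]; do
      sleep 2
    done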

                                                
                                    
TestAddons/parallel/Headlamp (21.17s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-618522 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-7gvpk" [d18e1cc4-0c6b-47f8-911d-4c38d1190bea] Pending
helpers_test.go:352: "headlamp-dfcdc64b-7gvpk" [d18e1cc4-0c6b-47f8-911d-4c38d1190bea] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-7gvpk" [d18e1cc4-0c6b-47f8-911d-4c38d1190bea] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004023759s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-618522 addons disable headlamp --alsologtostderr -v=1: (6.193449299s)
--- PASS: TestAddons/parallel/Headlamp (21.17s)
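A sketch of the same flow by hand: enable the headlamp addon and wait for its pod using the app.kubernetes.io/name=headlamp label from the test above:

    # Enable headlamp and block until its pod is Ready.
    out/minikube-linux-amd64 addons enable headlamp -p addons-618522 --alsologtostderr -v=1
    kubectl --context addons-618522 -n headlamp wait --for=condition=ready pod \
      -l app.kubernetes.io/name=headlamp --timeout=8m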

                                                
                                    
TestAddons/parallel/CloudSpanner (6.64s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-8ff42" [dbc978c2-35c6-425b-9883-a9a12d118bf5] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.006337211s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.64s)

                                                
                                    
TestAddons/parallel/LocalPath (13.12s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-618522 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-618522 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-618522 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [4972c186-3e92-4604-9575-0e41298edf72] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [4972c186-3e92-4604-9575-0e41298edf72] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [4972c186-3e92-4604-9575-0e41298edf72] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.006308647s
addons_test.go:967: (dbg) Run:  kubectl --context addons-618522 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 ssh "cat /opt/local-path-provisioner/pvc-c8bb1d8f-4c87-4fdb-8a4a-d380c7c73589_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-618522 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-618522 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (13.12s)
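For reference, the flow exercised above can be replayed by hand against the same profile. This is a minimal sketch using the harness's own commands; the provisioned directory name under /opt/local-path-provisioner is generated per PV, so the final path below is a placeholder.

    # apply the test claim and the consumer pod shipped with the test data
    kubectl --context addons-618522 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-618522 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # poll the claim phase the same way helpers_test.go does until it reports Bound
    kubectl --context addons-618522 get pvc test-pvc -o jsonpath={.status.phase} -n default
    # the provisioner writes under a generated pvc-<uid>_default_test-pvc directory; list it, then read file1
    out/minikube-linux-amd64 -p addons-618522 ssh "ls /opt/local-path-provisioner/"
    out/minikube-linux-amd64 -p addons-618522 ssh "cat /opt/local-path-provisioner/<pvc-dir>/file1"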

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (7.11s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-mgdnq" [ba7d5636-4bd4-4737-a2f4-8b93aadfc08d] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004792353s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-618522 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.107021044s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.11s)

                                                
                                    
TestAddons/parallel/Yakd (11.01s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-cvnkq" [df2c54f9-9845-40b8-9ccf-1a68524ff089] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.011112226s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-618522 addons disable yakd --alsologtostderr -v=1: (6.002671393s)
--- PASS: TestAddons/parallel/Yakd (11.01s)

                                                
                                    
TestAddons/StoppedEnableDisable (87.89s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-618522
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-618522: (1m27.694093456s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-618522
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-618522
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-618522
--- PASS: TestAddons/StoppedEnableDisable (87.89s)

                                                
                                    
TestCertOptions (59.01s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-935108 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-935108 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (57.844116887s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-935108 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-935108 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-935108 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-935108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-935108
--- PASS: TestCertOptions (59.01s)
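A minimal sketch of the same check, assuming only the flags and files already shown in this run; the grep filter on the SAN block is an illustration, not part of the test.

    # start a throwaway profile with extra apiserver SANs and a non-default apiserver port
    out/minikube-linux-amd64 start -p cert-options-935108 --memory=3072 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    # dump the served certificate and inspect its Subject Alternative Name entries
    out/minikube-linux-amd64 -p cert-options-935108 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    # the kubeconfig for the profile should point at port 8555
    kubectl --context cert-options-935108 config view
    out/minikube-linux-amd64 delete -p cert-options-935108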

                                                
                                    
TestCertExpiration (312.94s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-869210 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-869210 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m32.371857151s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-869210 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-869210 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (39.778385769s)
helpers_test.go:175: Cleaning up "cert-expiration-869210" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-869210
--- PASS: TestCertExpiration (312.94s)
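The expiration check above boils down to two starts of the same profile, with a wait in between for the short-lived certificates to lapse; a sketch of the sequence, using only the flags recorded in this run:

    # first start issues certificates that expire after three minutes
    out/minikube-linux-amd64 start -p cert-expiration-869210 --memory=3072 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    # ...wait out the three-minute window, then restart with a one-year expiry so the certs are regenerated
    out/minikube-linux-amd64 start -p cert-expiration-869210 --memory=3072 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p cert-expiration-869210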

                                                
                                    
TestForceSystemdFlag (82.58s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-436157 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-436157 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m21.518074426s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-436157 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-436157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-436157
--- PASS: TestForceSystemdFlag (82.58s)
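A rough way to eyeball what this test reads: start with --force-systemd and look at the CRI-O drop-in it generates. Filtering for cgroup_manager is an assumption about the key of interest, not taken from docker_test.go.

    out/minikube-linux-amd64 start -p force-systemd-flag-436157 --memory=3072 --force-systemd --driver=kvm2 --container-runtime=crio
    # the test cats this drop-in; grepping the cgroup manager line is an illustrative shortcut
    out/minikube-linux-amd64 -p force-systemd-flag-436157 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
    out/minikube-linux-amd64 delete -p force-systemd-flag-436157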

                                                
                                    
TestForceSystemdEnv (61.97s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-126461 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-126461 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m1.108112401s)
helpers_test.go:175: Cleaning up "force-systemd-env-126461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-126461
--- PASS: TestForceSystemdEnv (61.97s)

                                                
                                    
TestErrorSpam/setup (42.02s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-199701 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-199701 --driver=kvm2  --container-runtime=crio
E1206 08:36:25.409717    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:25.417812    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:25.429232    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:25.450650    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:25.492022    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:25.573454    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:25.734971    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:26.056692    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:26.698746    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:27.980759    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:30.542186    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:36:35.663713    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-199701 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-199701 --driver=kvm2  --container-runtime=crio: (42.024127698s)
--- PASS: TestErrorSpam/setup (42.02s)

                                                
                                    
TestErrorSpam/start (0.32s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
TestErrorSpam/status (0.7s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 status
--- PASS: TestErrorSpam/status (0.70s)

                                                
                                    
TestErrorSpam/pause (1.52s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 pause
--- PASS: TestErrorSpam/pause (1.52s)

                                                
                                    
TestErrorSpam/unpause (1.78s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

                                                
                                    
TestErrorSpam/stop (5.57s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 stop
E1206 08:36:45.905281    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 stop: (2.4151768s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 stop: (1.941775621s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-199701 --log_dir /tmp/nospam-199701 stop: (1.207622647s)
--- PASS: TestErrorSpam/stop (5.57s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22049-5603/.minikube/files/etc/test/nested/copy/9552/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (80.24s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-171063 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1206 08:37:06.386631    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:37:47.349872    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-171063 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m20.243219494s)
--- PASS: TestFunctional/serial/StartWithProxy (80.24s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.1s)
=== RUN   TestFunctional/serial/SoftStart
I1206 08:38:10.792230    9552 config.go:182] Loaded profile config "functional-171063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-171063 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-171063 --alsologtostderr -v=8: (39.095361031s)
functional_test.go:678: soft start took 39.096003478s for "functional-171063" cluster.
I1206 08:38:49.887916    9552 config.go:182] Loaded profile config "functional-171063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (39.10s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-171063 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-171063 cache add registry.k8s.io/pause:3.1: (1.048968372s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-171063 cache add registry.k8s.io/pause:3.3: (1.101175561s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-171063 cache add registry.k8s.io/pause:latest: (1.078509979s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-171063 /tmp/TestFunctionalserialCacheCmdcacheadd_local551370510/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 cache add minikube-local-cache-test:functional-171063
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-171063 cache add minikube-local-cache-test:functional-171063: (1.920887178s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 cache delete minikube-local-cache-test:functional-171063
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-171063
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.25s)
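The local-cache round trip above, condensed; the build-context directory stands in for the temp dir the test generates (it only needs to hold the throwaway Dockerfile).

    # build a disposable image, push it into minikube's on-host cache, then clean up both sides
    docker build -t minikube-local-cache-test:functional-171063 <build-context-dir>
    out/minikube-linux-amd64 -p functional-171063 cache add minikube-local-cache-test:functional-171063
    out/minikube-linux-amd64 -p functional-171063 cache delete minikube-local-cache-test:functional-171063
    docker rmi minikube-local-cache-test:functional-171063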

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.5s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-171063 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (172.992435ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.50s)
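Condensed, the reload check is: delete the image inside the node, confirm it is gone, then ask minikube to repopulate it from the host-side cache. A sketch built only from the commands logged above:

    out/minikube-linux-amd64 -p functional-171063 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-171063 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image absent
    out/minikube-linux-amd64 -p functional-171063 cache reload
    out/minikube-linux-amd64 -p functional-171063 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again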

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 kubectl -- --context functional-171063 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-171063 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (50.68s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-171063 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1206 08:39:09.271600    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-171063 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (50.680614967s)
functional_test.go:776: restart took 50.680725352s for "functional-171063" cluster.
I1206 08:39:48.325920    9552 config.go:182] Loaded profile config "functional-171063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (50.68s)
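The restart above pushes an apiserver flag through --extra-config. One hedged way to confirm it landed is to read the generated static-pod manifest; the path is the standard kubeadm location, not something this test checks.

    out/minikube-linux-amd64 start -p functional-171063 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    out/minikube-linux-amd64 -p functional-171063 ssh "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"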

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-171063 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.31s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-171063 logs: (1.307794177s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.34s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 logs --file /tmp/TestFunctionalserialLogsFileCmd1326015856/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-171063 logs --file /tmp/TestFunctionalserialLogsFileCmd1326015856/001/logs.txt: (1.334915996s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                    
TestFunctional/serial/InvalidService (4.94s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-171063 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-171063
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-171063: exit status 115 (219.60816ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.67:32601 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-171063 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-171063 delete -f testdata/invalidsvc.yaml: (1.525592522s)
--- PASS: TestFunctional/serial/InvalidService (4.94s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.41s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-171063 config get cpus: exit status 14 (64.616777ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-171063 config get cpus: exit status 14 (70.047795ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
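The config round trip above in plain terms: get on an unset key exits 14, set/get echoes the value back, and unset returns it to the missing state. All commands are the ones logged by the test.

    out/minikube-linux-amd64 -p functional-171063 config get cpus      # exit status 14: key not in config
    out/minikube-linux-amd64 -p functional-171063 config set cpus 2
    out/minikube-linux-amd64 -p functional-171063 config get cpus      # prints 2
    out/minikube-linux-amd64 -p functional-171063 config unset cpus
    out/minikube-linux-amd64 -p functional-171063 config get cpus      # exit status 14 again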

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.07s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-171063 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-171063 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 16008: os: process already finished
E1206 08:41:25.408868    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:41:53.113673    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/DashboardCmd (14.07s)

                                                
                                    
TestFunctional/parallel/DryRun (0.23s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-171063 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-171063 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (110.225029ms)

                                                
                                                
-- stdout --
	* [functional-171063] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:40:20.770677   15912 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:40:20.770927   15912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:40:20.770936   15912 out.go:374] Setting ErrFile to fd 2...
	I1206 08:40:20.770940   15912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:40:20.771147   15912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 08:40:20.771588   15912 out.go:368] Setting JSON to false
	I1206 08:40:20.772426   15912 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1363,"bootTime":1765009058,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:40:20.772490   15912 start.go:143] virtualization: kvm guest
	I1206 08:40:20.774525   15912 out.go:179] * [functional-171063] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 08:40:20.775927   15912 notify.go:221] Checking for updates...
	I1206 08:40:20.775951   15912 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:40:20.777805   15912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:40:20.779334   15912 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	I1206 08:40:20.780601   15912 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	I1206 08:40:20.781845   15912 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 08:40:20.783124   15912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:40:20.784730   15912 config.go:182] Loaded profile config "functional-171063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:40:20.785227   15912 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:40:20.816793   15912 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 08:40:20.818037   15912 start.go:309] selected driver: kvm2
	I1206 08:40:20.818055   15912 start.go:927] validating driver "kvm2" against &{Name:functional-171063 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-171063 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:40:20.818170   15912 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:40:20.820265   15912 out.go:203] 
	W1206 08:40:20.822025   15912 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 08:40:20.823306   15912 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-171063 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.23s)
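The dry-run check rejects an undersized memory request before touching the VM; a sketch of the failing and passing invocations from this run, with the exit-code check added for illustration:

    out/minikube-linux-amd64 start -p functional-171063 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio
    echo $?   # 23 (RSRC_INSUFFICIENT_REQ_MEMORY: 250MiB is below the 1800MB usable minimum)
    out/minikube-linux-amd64 start -p functional-171063 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio   # succeeds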

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.12s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-171063 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-171063 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (114.852718ms)

                                                
                                                
-- stdout --
	* [functional-171063] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:40:19.051998   15773 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:40:19.052548   15773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:40:19.052563   15773 out.go:374] Setting ErrFile to fd 2...
	I1206 08:40:19.052570   15773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:40:19.053196   15773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 08:40:19.053943   15773 out.go:368] Setting JSON to false
	I1206 08:40:19.054826   15773 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1361,"bootTime":1765009058,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:40:19.054915   15773 start.go:143] virtualization: kvm guest
	I1206 08:40:19.056534   15773 out.go:179] * [functional-171063] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1206 08:40:19.057825   15773 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:40:19.057832   15773 notify.go:221] Checking for updates...
	I1206 08:40:19.058967   15773 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:40:19.060366   15773 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	I1206 08:40:19.061573   15773 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	I1206 08:40:19.065703   15773 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 08:40:19.066818   15773 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:40:19.068202   15773 config.go:182] Loaded profile config "functional-171063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:40:19.068680   15773 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:40:19.100828   15773 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1206 08:40:19.101909   15773 start.go:309] selected driver: kvm2
	I1206 08:40:19.101928   15773 start.go:927] validating driver "kvm2" against &{Name:functional-171063 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-171063 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:40:19.102020   15773 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:40:19.103864   15773 out.go:203] 
	W1206 08:40:19.104950   15773 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 08:40:19.106040   15773 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.76s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.76s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (20.43s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-171063 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-171063 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-s4575" [12ae6735-61b5-45e5-a064-29a0f6ce23e2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-s4575" [12ae6735-61b5-45e5-a064-29a0f6ce23e2] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 20.007596471s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.67:31033
functional_test.go:1680: http://192.168.39.67:31033: success! body:
Request served by hello-node-connect-7d85dfc575-s4575

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.67:31033
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (20.43s)
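Sketch of the same NodePort round trip; capturing the URL and curling it stands in for the HTTP GET the test performs in Go.

    kubectl --context functional-171063 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-171063 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-171063 service hello-node-connect --url)
    curl -s "$URL"   # echo-server replies with the request it served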

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.34s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh -n functional-171063 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 cp functional-171063:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3897108930/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh -n functional-171063 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh -n functional-171063 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.12s)
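For reference, the copy round trip exercised here maps onto two plain `minikube cp` directions plus an SSH read-back; a minimal sketch, assuming the same profile and a /tmp destination chosen only for illustration:

  # Host -> guest, then read the file back over SSH
  out/minikube-linux-amd64 -p functional-171063 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-amd64 -p functional-171063 ssh -n functional-171063 "sudo cat /home/docker/cp-test.txt"

  # Guest -> host, then verify nothing changed in transit
  out/minikube-linux-amd64 -p functional-171063 cp functional-171063:/home/docker/cp-test.txt /tmp/cp-test.txt
  diff testdata/cp-test.txt /tmp/cp-test.txt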

                                                
                                    
x
+
TestFunctional/parallel/MySQL (23.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-171063 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-ql5z4" [8f961666-5410-46a1-bffc-ce0456227f36] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-ql5z4" [8f961666-5410-46a1-bffc-ce0456227f36] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.003723421s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-171063 exec mysql-5bb876957f-ql5z4 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-171063 exec mysql-5bb876957f-ql5z4 -- mysql -ppassword -e "show databases;": exit status 1 (286.918276ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1206 08:40:17.872318    9552 retry.go:31] will retry after 727.233212ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-171063 exec mysql-5bb876957f-ql5z4 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-171063 exec mysql-5bb876957f-ql5z4 -- mysql -ppassword -e "show databases;": exit status 1 (135.223174ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1206 08:40:18.735357    9552 retry.go:31] will retry after 1.069072071s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-171063 exec mysql-5bb876957f-ql5z4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.57s)
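The two ERROR 2002 exits above are expected churn rather than a failure: the pod reports Running before mysqld has finished creating its socket, so the test simply retries until the query succeeds. The same wait can be reproduced with a small retry loop; a sketch, reusing the pod name from this run:

  # Retry until mysqld has its socket up; "Running" only means the container has started
  for i in $(seq 1 10); do
    kubectl --context functional-171063 exec mysql-5bb876957f-ql5z4 -- \
      mysql -ppassword -e "show databases;" && break
    sleep 2
  done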

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9552/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "sudo cat /etc/test/nested/copy/9552/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.17s)
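The path being checked comes from minikube's file sync tree: files staged under $MINIKUBE_HOME/files on the host are copied into the guest at the mirrored path during start. A sketch of using that by hand, assuming the default ~/.minikube home and an illustrative /etc/motd.d/hello target:

  # Stage a file on the host; minikube copies it to /etc/motd.d/hello in the guest on the next start
  mkdir -p ~/.minikube/files/etc/motd.d
  echo "hello from file sync" > ~/.minikube/files/etc/motd.d/hello
  out/minikube-linux-amd64 -p functional-171063 start
  out/minikube-linux-amd64 -p functional-171063 ssh "cat /etc/motd.d/hello"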

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9552.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "sudo cat /etc/ssl/certs/9552.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9552.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "sudo cat /usr/share/ca-certificates/9552.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/95522.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "sudo cat /etc/ssl/certs/95522.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/95522.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "sudo cat /usr/share/ca-certificates/95522.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.10s)
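The 51391683.0 and 3ec20f2e.0 names are OpenSSL subject-hash aliases: the certs directory is looked up by hash, so the test checks both the readable .pem and its hashed link. A sketch of verifying that mapping inside the guest, assuming openssl is available in the guest image:

  # A CA cert's hashed alias is its OpenSSL subject hash plus a ".0" suffix;
  # this should print 51391683 if 9552.pem is the cert behind /etc/ssl/certs/51391683.0
  out/minikube-linux-amd64 -p functional-171063 ssh \
    "openssl x509 -noout -subject_hash -in /etc/ssl/certs/9552.pem"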

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-171063 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-171063 ssh "sudo systemctl is-active docker": exit status 1 (182.269449ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-171063 ssh "sudo systemctl is-active containerd": exit status 1 (187.361034ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)
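The non-zero exits above are the expected result, not noise: `systemctl is-active` returns status 3 for an inactive unit, which `minikube ssh` surfaces as a failing command, so on a crio profile both docker and containerd should report exactly this. A quick manual check across all three runtimes, assuming the same profile:

  # Only the configured runtime (crio here) should be active; the others print "inactive" and exit non-zero
  for unit in crio docker containerd; do
    out/minikube-linux-amd64 -p functional-171063 ssh "sudo systemctl is-active $unit" || true
  done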

                                                
                                    
x
+
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-171063 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-171063
localhost/kicbase/echo-server:functional-171063
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-171063 image ls --format short --alsologtostderr:
I1206 08:40:23.658015   16073 out.go:360] Setting OutFile to fd 1 ...
I1206 08:40:23.658263   16073 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:40:23.658273   16073 out.go:374] Setting ErrFile to fd 2...
I1206 08:40:23.658277   16073 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:40:23.658522   16073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
I1206 08:40:23.659137   16073 config.go:182] Loaded profile config "functional-171063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:40:23.659244   16073 config.go:182] Loaded profile config "functional-171063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:40:23.661434   16073 ssh_runner.go:195] Run: systemctl --version
I1206 08:40:23.663615   16073 main.go:143] libmachine: domain functional-171063 has defined MAC address 52:54:00:06:43:38 in network mk-functional-171063
I1206 08:40:23.663981   16073 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:06:43:38", ip: ""} in network mk-functional-171063: {Iface:virbr1 ExpiryTime:2025-12-06 09:37:06 +0000 UTC Type:0 Mac:52:54:00:06:43:38 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:functional-171063 Clientid:01:52:54:00:06:43:38}
I1206 08:40:23.664006   16073 main.go:143] libmachine: domain functional-171063 has defined IP address 192.168.39.67 and MAC address 52:54:00:06:43:38 in network mk-functional-171063
I1206 08:40:23.664192   16073 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/functional-171063/id_rsa Username:docker}
I1206 08:40:23.742069   16073 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)
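`image ls` exposes the same crictl-backed listing in several shapes; the short form above is just the tag list, while the table/json/yaml variants that follow add IDs, digests, and sizes. A sketch of picking a format to match the consumer, with jq assumed to be installed on the host:

  # Human-readable overview
  out/minikube-linux-amd64 -p functional-171063 image ls --format table

  # Machine-readable, e.g. extracting image IDs
  out/minikube-linux-amd64 -p functional-171063 image ls --format json | jq -r '.[].id'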

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-171063 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ localhost/minikube-local-cache-test     │ functional-171063  │ 7c41bbd51c18a │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-171063  │ 142787919f0e9 │ 1.47MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-171063  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-171063 image ls --format table --alsologtostderr:
I1206 08:40:29.061027   16237 out.go:360] Setting OutFile to fd 1 ...
I1206 08:40:29.061122   16237 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:40:29.061132   16237 out.go:374] Setting ErrFile to fd 2...
I1206 08:40:29.061138   16237 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:40:29.061331   16237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
I1206 08:40:29.061878   16237 config.go:182] Loaded profile config "functional-171063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:40:29.061969   16237 config.go:182] Loaded profile config "functional-171063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:40:29.064004   16237 ssh_runner.go:195] Run: systemctl --version
I1206 08:40:29.066322   16237 main.go:143] libmachine: domain functional-171063 has defined MAC address 52:54:00:06:43:38 in network mk-functional-171063
I1206 08:40:29.066720   16237 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:06:43:38", ip: ""} in network mk-functional-171063: {Iface:virbr1 ExpiryTime:2025-12-06 09:37:06 +0000 UTC Type:0 Mac:52:54:00:06:43:38 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:functional-171063 Clientid:01:52:54:00:06:43:38}
I1206 08:40:29.066741   16237 main.go:143] libmachine: domain functional-171063 has defined IP address 192.168.39.67 and MAC address 52:54:00:06:43:38 in network mk-functional-171063
I1206 08:40:29.066878   16237 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/functional-171063/id_rsa Username:docker}
I1206 08:40:29.162714   16237 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-171063 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-171063"],"size":"4943877"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e
732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"7c41bbd51c18a87cb56c4183f2debd5ce47ed650396711cbb14ce8557921b273","repoDigests":["localhost/minikube-local-cache-test@sha256:f7d21b852ccfd905f0ed47d2e792cdc0ce4703a16a3d5f69cffb1cf069ce5c18"],"repoTags":["localhost/minikube-local-cache-test:functional-171063"],"size":"3330"},{"id":"142787919f0e92922cc32b65de35b7f8c51c37958a996ad192edf9acd6ce8323","repoDigests":["localhost/my-image@sha256:8450654628125a07383306cdc14fb962f4acd0b3bf48780f88116b511ab1dbd6"],"repoTags":["localhost/my-image:functional-171063"],"size":"1468600"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":
["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"547b7a21d031098799e6c4e558e0b1f19e8d47d6d6751e3d5ecc555926f4c7e5","repoDigests":["docker.io/library/e749c5824c8d5856a1caf62463ba65358fafb545783a5d2513d31ace0f5dfbaa-tmp@sha256:beb002a133f7e854b3b552ff15420209683bb6c2a4298d6722719112f0e55ce6"],"repoTags":[],"size":"1466018"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b
78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb40
04a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registr
y.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1f
aaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-171063 image ls --format json --alsologtostderr:
I1206 08:40:28.820547   16214 out.go:360] Setting OutFile to fd 1 ...
I1206 08:40:28.820844   16214 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:40:28.820857   16214 out.go:374] Setting ErrFile to fd 2...
I1206 08:40:28.820863   16214 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:40:28.821156   16214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
I1206 08:40:28.821915   16214 config.go:182] Loaded profile config "functional-171063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:40:28.822055   16214 config.go:182] Loaded profile config "functional-171063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:40:28.824732   16214 ssh_runner.go:195] Run: systemctl --version
I1206 08:40:28.827527   16214 main.go:143] libmachine: domain functional-171063 has defined MAC address 52:54:00:06:43:38 in network mk-functional-171063
I1206 08:40:28.827999   16214 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:06:43:38", ip: ""} in network mk-functional-171063: {Iface:virbr1 ExpiryTime:2025-12-06 09:37:06 +0000 UTC Type:0 Mac:52:54:00:06:43:38 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:functional-171063 Clientid:01:52:54:00:06:43:38}
I1206 08:40:28.828034   16214 main.go:143] libmachine: domain functional-171063 has defined IP address 192.168.39.67 and MAC address 52:54:00:06:43:38 in network mk-functional-171063
I1206 08:40:28.828176   16214 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/functional-171063/id_rsa Username:docker}
I1206 08:40:28.927721   16214 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-171063 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-171063
size: "4943877"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 7c41bbd51c18a87cb56c4183f2debd5ce47ed650396711cbb14ce8557921b273
repoDigests:
- localhost/minikube-local-cache-test@sha256:f7d21b852ccfd905f0ed47d2e792cdc0ce4703a16a3d5f69cffb1cf069ce5c18
repoTags:
- localhost/minikube-local-cache-test:functional-171063
size: "3330"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-171063 image ls --format yaml --alsologtostderr:
I1206 08:40:23.843163   16084 out.go:360] Setting OutFile to fd 1 ...
I1206 08:40:23.843269   16084 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:40:23.843278   16084 out.go:374] Setting ErrFile to fd 2...
I1206 08:40:23.843282   16084 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:40:23.843439   16084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
I1206 08:40:23.843940   16084 config.go:182] Loaded profile config "functional-171063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:40:23.844039   16084 config.go:182] Loaded profile config "functional-171063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:40:23.846071   16084 ssh_runner.go:195] Run: systemctl --version
I1206 08:40:23.848381   16084 main.go:143] libmachine: domain functional-171063 has defined MAC address 52:54:00:06:43:38 in network mk-functional-171063
I1206 08:40:23.848772   16084 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:06:43:38", ip: ""} in network mk-functional-171063: {Iface:virbr1 ExpiryTime:2025-12-06 09:37:06 +0000 UTC Type:0 Mac:52:54:00:06:43:38 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:functional-171063 Clientid:01:52:54:00:06:43:38}
I1206 08:40:23.848794   16084 main.go:143] libmachine: domain functional-171063 has defined IP address 192.168.39.67 and MAC address 52:54:00:06:43:38 in network mk-functional-171063
I1206 08:40:23.848927   16084 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/functional-171063/id_rsa Username:docker}
I1206 08:40:23.929537   16084 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-171063 ssh pgrep buildkitd: exit status 1 (146.442341ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image build -t localhost/my-image:functional-171063 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-171063 image build -t localhost/my-image:functional-171063 testdata/build --alsologtostderr: (4.410376285s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-171063 image build -t localhost/my-image:functional-171063 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 547b7a21d03
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-171063
--> 142787919f0
Successfully tagged localhost/my-image:functional-171063
142787919f0e92922cc32b65de35b7f8c51c37958a996ad192edf9acd6ce8323
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-171063 image build -t localhost/my-image:functional-171063 testdata/build --alsologtostderr:
I1206 08:40:24.170123   16106 out.go:360] Setting OutFile to fd 1 ...
I1206 08:40:24.170245   16106 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:40:24.170253   16106 out.go:374] Setting ErrFile to fd 2...
I1206 08:40:24.170257   16106 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:40:24.170414   16106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
I1206 08:40:24.170937   16106 config.go:182] Loaded profile config "functional-171063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:40:24.171531   16106 config.go:182] Loaded profile config "functional-171063": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 08:40:24.173429   16106 ssh_runner.go:195] Run: systemctl --version
I1206 08:40:24.175439   16106 main.go:143] libmachine: domain functional-171063 has defined MAC address 52:54:00:06:43:38 in network mk-functional-171063
I1206 08:40:24.175790   16106 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:06:43:38", ip: ""} in network mk-functional-171063: {Iface:virbr1 ExpiryTime:2025-12-06 09:37:06 +0000 UTC Type:0 Mac:52:54:00:06:43:38 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:functional-171063 Clientid:01:52:54:00:06:43:38}
I1206 08:40:24.175813   16106 main.go:143] libmachine: domain functional-171063 has defined IP address 192.168.39.67 and MAC address 52:54:00:06:43:38 in network mk-functional-171063
I1206 08:40:24.175939   16106 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/functional-171063/id_rsa Username:docker}
I1206 08:40:24.254834   16106 build_images.go:162] Building image from path: /tmp/build.2190481863.tar
I1206 08:40:24.254894   16106 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 08:40:24.274754   16106 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2190481863.tar
I1206 08:40:24.280559   16106 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2190481863.tar: stat -c "%s %y" /var/lib/minikube/build/build.2190481863.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2190481863.tar': No such file or directory
I1206 08:40:24.280605   16106 ssh_runner.go:362] scp /tmp/build.2190481863.tar --> /var/lib/minikube/build/build.2190481863.tar (3072 bytes)
I1206 08:40:24.323509   16106 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2190481863
I1206 08:40:24.340571   16106 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2190481863 -xf /var/lib/minikube/build/build.2190481863.tar
I1206 08:40:24.360498   16106 crio.go:315] Building image: /var/lib/minikube/build/build.2190481863
I1206 08:40:24.360609   16106 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-171063 /var/lib/minikube/build/build.2190481863 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1206 08:40:28.451782   16106 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-171063 /var/lib/minikube/build/build.2190481863 --cgroup-manager=cgroupfs: (4.091143375s)
I1206 08:40:28.451873   16106 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2190481863
I1206 08:40:28.486951   16106 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2190481863.tar
I1206 08:40:28.519768   16106 build_images.go:218] Built localhost/my-image:functional-171063 from /tmp/build.2190481863.tar
I1206 08:40:28.519807   16106 build_images.go:134] succeeded building to: functional-171063
I1206 08:40:28.519814   16106 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.79s)
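The failed `pgrep buildkitd` at the top is only a probe for a BuildKit daemon; with none running on a crio node, `image build` tars up the context, ships it into the guest, and runs `sudo podman build` there, which is what the stderr trace shows. Reproducing the build by hand, assuming a context directory with a Dockerfile such as testdata/build:

  # Build inside the cluster's runtime and confirm the tag is visible to CRI-O
  out/minikube-linux-amd64 -p functional-171063 image build -t localhost/my-image:functional-171063 testdata/build
  out/minikube-linux-amd64 -p functional-171063 image ls | grep my-image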

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.948465077s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-171063
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.97s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)
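`update-context` rewrites the profile's kubeconfig entry so it points at the cluster's current IP and port, which matters after the VM picks up a new DHCP lease. A minimal end-to-end check, assuming kubectl on the host:

  # Re-point kubeconfig at the profile's current endpoint, then confirm the API server answers
  out/minikube-linux-amd64 -p functional-171063 update-context
  kubectl --context functional-171063 get nodes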

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "249.107144ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "63.46001ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "248.878204ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "65.434805ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)
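The paired timings above are the point of the test: the plain listing probes each cluster's status (about 250ms here), while -l/--light skips the status check and returns profile metadata in tens of milliseconds. A sketch of both forms:

  # Full listing (queries cluster status) vs. light listing (profile metadata only)
  out/minikube-linux-amd64 profile list -o json
  out/minikube-linux-amd64 profile list -o json --light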

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image load --daemon kicbase/echo-server:functional-171063 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-171063 image load --daemon kicbase/echo-server:functional-171063 --alsologtostderr: (1.272050325s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image load --daemon kicbase/echo-server:functional-171063 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-171063
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image load --daemon kicbase/echo-server:functional-171063 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.93s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image save kicbase/echo-server:functional-171063 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-171063 image save kicbase/echo-server:functional-171063 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.543818466s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image rm kicbase/echo-server:functional-171063 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-171063
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 image save --daemon kicbase/echo-server:functional-171063 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-171063
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.80s)
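Taken together, the Save/Remove/Load tests above form a round trip that moves one image between the host's docker daemon, a tarball, and the cluster's CRI-O store. A condensed sketch, assuming kicbase/echo-server is already tagged for the profile as in the Setup test and using an illustrative /tmp/echo-server.tar path:

  # Cluster -> tarball -> cluster, then back out to the host docker daemon
  out/minikube-linux-amd64 -p functional-171063 image save kicbase/echo-server:functional-171063 /tmp/echo-server.tar
  out/minikube-linux-amd64 -p functional-171063 image rm kicbase/echo-server:functional-171063
  out/minikube-linux-amd64 -p functional-171063 image load /tmp/echo-server.tar
  out/minikube-linux-amd64 -p functional-171063 image save --daemon kicbase/echo-server:functional-171063
  docker image inspect localhost/kicbase/echo-server:functional-171063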

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-171063 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-171063 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-9xzqr" [cb5c6960-1721-4e04-828a-4979165b6abd] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-9xzqr" [cb5c6960-1721-4e04-828a-4979165b6abd] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.006670965s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.12s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-171063 /tmp/TestFunctionalparallelMountCmdany-port104163894/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765010419111714730" to /tmp/TestFunctionalparallelMountCmdany-port104163894/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765010419111714730" to /tmp/TestFunctionalparallelMountCmdany-port104163894/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765010419111714730" to /tmp/TestFunctionalparallelMountCmdany-port104163894/001/test-1765010419111714730
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-171063 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (157.387433ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 08:40:19.269442    9552 retry.go:31] will retry after 475.622011ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 08:40 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 08:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 08:40 test-1765010419111714730
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh cat /mount-9p/test-1765010419111714730
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-171063 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [0e7b1705-1df4-4cca-b820-36c9e4bb31ff] Pending
helpers_test.go:352: "busybox-mount" [0e7b1705-1df4-4cca-b820-36c9e4bb31ff] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [0e7b1705-1df4-4cca-b820-36c9e4bb31ff] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [0e7b1705-1df4-4cca-b820-36c9e4bb31ff] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.005674686s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-171063 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-171063 /tmp/TestFunctionalparallelMountCmdany-port104163894/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.12s)
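MountCmd/any-port starts `minikube mount` as a background process and then retries `findmnt` over SSH until the 9p mount appears, which is why the first probe above failed and was retried. A rough sketch of that wait loop, assuming the profile from this run and a placeholder host directory; the retry budget is arbitrary.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-171063" // assumed existing profile
	hostDir := "/tmp/demo-mount"   // placeholder host directory
	if err := os.MkdirAll(hostDir, 0o755); err != nil {
		log.Fatal(err)
	}

	// Start the mount in the background, the way the test's daemon helper does.
	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", profile,
		hostDir+":/mount-9p", "--alsologtostderr", "-v=1")
	if err := mount.Start(); err != nil {
		log.Fatalf("starting mount: %v", err)
	}
	defer mount.Process.Kill() // tear the mount process down when done

	// Poll until findmnt inside the guest reports the 9p mount.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mount is up:\n%s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("mount never appeared in the guest")
}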

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 service list -o json
functional_test.go:1504: Took "420.802018ms" to run "out/minikube-linux-amd64 -p functional-171063 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.67:30846
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.67:30846
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
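ServiceCmd/HTTPS, Format, and URL all resolve the NodePort endpoint for the hello-node deployment created in DeployApp. A small sketch that asks minikube for the plain URL and issues one request against it; it assumes hello-node is already deployed and exposed, as above.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-171063" // assumed existing profile

	// Ask minikube for the service URL (e.g. http://192.168.39.67:30846).
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatalf("service --url: %v", err)
	}
	url := strings.TrimSpace(string(out))

	// Hit the endpoint once; echo-server answers with the request details.
	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("GET %s: %v", url, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %s\n%s", url, resp.Status, body)
}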

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.29s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-171063 /tmp/TestFunctionalparallelMountCmdspecific-port3725818201/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-171063 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (214.893371ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 08:40:28.451580    9552 retry.go:31] will retry after 284.196165ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-171063 /tmp/TestFunctionalparallelMountCmdspecific-port3725818201/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-171063 ssh "sudo umount -f /mount-9p": exit status 1 (184.645493ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-171063 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-171063 /tmp/TestFunctionalparallelMountCmdspecific-port3725818201/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.29s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.29s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-171063 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3991001899/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-171063 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3991001899/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-171063 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3991001899/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-171063 ssh "findmnt -T" /mount1: exit status 1 (212.256756ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 08:40:29.738678    9552 retry.go:31] will retry after 493.391555ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-171063 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-171063 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-171063 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3991001899/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-171063 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3991001899/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-171063 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3991001899/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
2025/12/06 08:40:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.29s)
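VerifyCleanup spins up three mounts of the same host directory and then relies on `minikube mount --kill=true` to reap every mount process for the profile, which is why the per-process stop helpers afterwards find the parents already gone. A sketch of just that cleanup step, assuming mount processes were started earlier for this profile.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "functional-171063" // assumed existing profile

	// --kill=true terminates all mount processes associated with the profile,
	// so any per-process cleanup afterwards is best-effort only.
	out, err := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", profile, "--kill=true").CombinedOutput()
	if err != nil {
		log.Fatalf("mount --kill: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}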

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-171063
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-171063
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-171063
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22049-5603/.minikube/files/etc/test/nested/copy/9552/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (72.56s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-118298 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1206 08:46:25.409424    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-118298 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m12.560178447s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (72.56s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (52.96s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1206 08:47:21.287109    9552 config.go:182] Loaded profile config "functional-118298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-118298 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-118298 --alsologtostderr -v=8: (52.959296878s)
functional_test.go:678: soft start took 52.959717799s for "functional-118298" cluster.
I1206 08:48:14.246736    9552 config.go:182] Loaded profile config "functional-118298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (52.96s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-118298 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.24s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-118298 cache add registry.k8s.io/pause:3.1: (1.011803503s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-118298 cache add registry.k8s.io/pause:3.3: (1.104483891s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-118298 cache add registry.k8s.io/pause:latest: (1.12638801s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.19s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-118298 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2732318098/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 cache add minikube-local-cache-test:functional-118298
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-118298 cache add minikube-local-cache-test:functional-118298: (1.913974845s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 cache delete minikube-local-cache-test:functional-118298
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-118298
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.55s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118298 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (175.732499ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.55s)
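cache_reload deletes the pause image inside the node, confirms `crictl inspecti` no longer finds it, and then uses `minikube cache reload` to push every cached image back into the runtime. A compact sketch of that sequence, assuming the same profile and that registry.k8s.io/pause:latest was previously added with `cache add` as in add_remote above.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// mk runs a minikube subcommand against the given profile.
func mk(profile string, args ...string) ([]byte, error) {
	full := append([]string{"-p", profile}, args...)
	return exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
}

func main() {
	profile := "functional-118298" // assumed existing profile
	img := "registry.k8s.io/pause:latest"

	// Remove the image from the node's container runtime.
	if out, err := mk(profile, "ssh", "sudo crictl rmi "+img); err != nil {
		log.Fatalf("crictl rmi: %v\n%s", err, out)
	}
	// inspecti is now expected to fail: the image is gone from the node.
	if _, err := mk(profile, "ssh", "sudo crictl inspecti "+img); err == nil {
		log.Fatal("image unexpectedly still present")
	}
	// Reload everything in minikube's cache back into the runtime.
	if out, err := mk(profile, "cache", "reload"); err != nil {
		log.Fatalf("cache reload: %v\n%s", err, out)
	}
	// The image should be back.
	out, err := mk(profile, "ssh", "sudo crictl inspecti "+img)
	if err != nil {
		log.Fatalf("image still missing after reload: %v", err)
	}
	fmt.Printf("%s", out)
}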

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 kubectl -- --context functional-118298 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-118298 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (34.78s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-118298 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-118298 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.778334947s)
functional_test.go:776: restart took 34.778455446s for "functional-118298" cluster.
I1206 08:48:56.792622    9552 config.go:182] Loaded profile config "functional-118298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (34.78s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-118298 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)
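ComponentHealth pulls the control-plane pods as JSON and checks each one's phase and Ready condition, producing the phase/status pairs logged above. A small sketch of the same check using kubectl's JSON output; the context name comes from this run and the struct only decodes the fields the check needs.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList decodes only the fields the health check looks at.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-118298",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		log.Fatalf("kubectl get po: %v", err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatalf("decoding pod list: %v", err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}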

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.29s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-118298 logs: (1.287561902s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.27s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs957748537/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-118298 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs957748537/001/logs.txt: (1.267505482s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.82s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-118298 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-118298
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-118298: exit status 115 (246.118906ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.15:30300 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-118298 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-118298 delete -f testdata/invalidsvc.yaml: (1.354812734s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.82s)
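InvalidService expects `minikube service` to fail with a SVC_UNREACHABLE error (exit status 115 in this run) when the service selects no running pods. A sketch that runs the command and surfaces the exit code rather than asserting a specific value; the service name matches the test's invalidsvc.yaml fixture, which is assumed to be applied already.

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "functional-118298" // assumed existing profile

	out, err := exec.Command("out/minikube-linux-amd64",
		"service", "invalid-svc", "-p", profile).CombinedOutput()
	if err == nil {
		log.Fatal("expected the service command to fail for a pod-less service")
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In this run minikube exited with status 115 (SVC_UNREACHABLE).
		fmt.Printf("minikube service failed as expected, exit code %d\n", exitErr.ExitCode())
	}
	fmt.Printf("%s", out)
}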

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.42s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118298 config get cpus: exit status 14 (61.442802ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118298 config get cpus: exit status 14 (67.179395ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.42s)
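ConfigCmd cycles `config unset` / `get` / `set` and expects `config get` on a missing key to fail (exit status 14 above). A sketch of that round trip; only the presence or absence of an error is checked, not the exact exit code.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// config runs a minikube config subcommand for the given profile.
func config(profile string, args ...string) (string, error) {
	full := append([]string{"-p", profile, "config"}, args...)
	out, err := exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	profile := "functional-118298" // assumed existing profile

	config(profile, "unset", "cpus") // start from a clean slate; result ignored

	if _, err := config(profile, "get", "cpus"); err == nil {
		log.Fatal("expected 'config get cpus' to fail while the key is unset")
	}
	if _, err := config(profile, "set", "cpus", "2"); err != nil {
		log.Fatalf("config set: %v", err)
	}
	val, err := config(profile, "get", "cpus")
	if err != nil {
		log.Fatalf("config get after set: %v", err)
	}
	fmt.Println("cpus =", val)

	if _, err := config(profile, "unset", "cpus"); err != nil {
		log.Fatalf("config unset: %v", err)
	}
}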

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (20.41s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-118298 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-118298 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 19781: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (20.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-118298 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-118298 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (111.226093ms)

                                                
                                                
-- stdout --
	* [functional-118298] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:49:15.728199   19664 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:49:15.728321   19664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:49:15.728332   19664 out.go:374] Setting ErrFile to fd 2...
	I1206 08:49:15.728337   19664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:49:15.728578   19664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 08:49:15.729047   19664 out.go:368] Setting JSON to false
	I1206 08:49:15.729877   19664 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1898,"bootTime":1765009058,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:49:15.729926   19664 start.go:143] virtualization: kvm guest
	I1206 08:49:15.731234   19664 out.go:179] * [functional-118298] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 08:49:15.732611   19664 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:49:15.732602   19664 notify.go:221] Checking for updates...
	I1206 08:49:15.734656   19664 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:49:15.735754   19664 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	I1206 08:49:15.736818   19664 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	I1206 08:49:15.738026   19664 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 08:49:15.739142   19664 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:49:15.740845   19664 config.go:182] Loaded profile config "functional-118298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 08:49:15.741536   19664 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:49:15.774523   19664 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 08:49:15.775662   19664 start.go:309] selected driver: kvm2
	I1206 08:49:15.775675   19664 start.go:927] validating driver "kvm2" against &{Name:functional-118298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-118298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:49:15.775777   19664 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:49:15.777511   19664 out.go:203] 
	W1206 08:49:15.778736   19664 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 08:49:15.779801   19664 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-118298 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.23s)
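DryRun verifies that an under-sized --memory request is rejected during validation before anything is started (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY, in the output above). A sketch of the same negative check; it only asserts that the dry run fails and reports whatever exit code comes back.

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "functional-118298" // assumed existing profile

	// 250MB is well below minikube's usable minimum, so validation should fail.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	if err == nil {
		log.Fatal("expected the dry run to reject a 250MB memory request")
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("dry run rejected as expected, exit code %d\n", exitErr.ExitCode())
	}
	fmt.Printf("%s", out)
}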

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.1s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-118298 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-118298 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (102.486094ms)

                                                
                                                
-- stdout --
	* [functional-118298] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:49:12.966355   19573 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:49:12.966435   19573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:49:12.966439   19573 out.go:374] Setting ErrFile to fd 2...
	I1206 08:49:12.966443   19573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:49:12.966760   19573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 08:49:12.967161   19573 out.go:368] Setting JSON to false
	I1206 08:49:12.967934   19573 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1895,"bootTime":1765009058,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 08:49:12.967982   19573 start.go:143] virtualization: kvm guest
	I1206 08:49:12.969813   19573 out.go:179] * [functional-118298] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1206 08:49:12.970848   19573 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 08:49:12.970848   19573 notify.go:221] Checking for updates...
	I1206 08:49:12.972738   19573 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 08:49:12.973874   19573 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	I1206 08:49:12.974858   19573 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	I1206 08:49:12.975990   19573 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 08:49:12.977025   19573 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 08:49:12.978366   19573 config.go:182] Loaded profile config "functional-118298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 08:49:12.978870   19573 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 08:49:13.007377   19573 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1206 08:49:13.008409   19573 start.go:309] selected driver: kvm2
	I1206 08:49:13.008422   19573 start.go:927] validating driver "kvm2" against &{Name:functional-118298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-118298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.15 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 08:49:13.008511   19573 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 08:49:13.010350   19573 out.go:203] 
	W1206 08:49:13.011298   19573 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 08:49:13.012307   19573 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.10s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.74s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.74s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.5s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-118298 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-118298 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-9btdf" [23074611-b5c6-4bec-b988-ca9e38c6ec4a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-9f67c86d4-9btdf" [23074611-b5c6-4bec-b988-ca9e38c6ec4a] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004146021s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.15:31648
functional_test.go:1680: http://192.168.39.15:31648: success! body:
Request served by hello-node-connect-9f67c86d4-9btdf

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.15:31648
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.50s)
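ServiceCmdConnect deploys echo-server, exposes it as a NodePort, resolves the URL via `minikube service --url`, and checks the HTTP response body shown above. A condensed sketch of that flow; the wait loop here is a simple poll rather than the harness's pod-watch helper, and the context/profile names are assumed.

package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx, profile := "functional-118298", "functional-118298" // assumed context and profile

	run := func(name string, args ...string) string {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v: %v\n%s", name, args, err, out)
		}
		return strings.TrimSpace(string(out))
	}

	run("kubectl", "--context", ctx, "create", "deployment", "hello-node-connect",
		"--image", "kicbase/echo-server")
	run("kubectl", "--context", ctx, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080")
	url := run("out/minikube-linux-amd64", "-p", profile, "service", "hello-node-connect", "--url")

	// Poll until the pod behind the NodePort answers.
	for i := 0; i < 30; i++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Printf("%s -> %s\n", url, resp.Status)
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatalf("no response from %s", url)
}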

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (50.48s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [1c2d6971-99b9-4f62-a4c2-6e20e9743732] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005600636s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-118298 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-118298 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-118298 get pvc myclaim -o=json
I1206 08:49:10.922988    9552 retry.go:31] will retry after 1.047794515s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:a83ba30a-2cba-4073-9800-31c99cec73ba ResourceVersion:747 Generation:0 CreationTimestamp:2025-12-06 08:49:10 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-a83ba30a-2cba-4073-9800-31c99cec73ba StorageClassName:0xc001f9b120 VolumeMode:0xc001f9b130 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-118298 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-118298 apply -f testdata/storage-provisioner/pod.yaml
I1206 08:49:12.166530    9552 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [aed85126-1518-4863-97ea-c78627130643] Pending
helpers_test.go:352: "sp-pod" [aed85126-1518-4863-97ea-c78627130643] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [aed85126-1518-4863-97ea-c78627130643] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.006210169s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-118298 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-118298 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-118298 delete -f testdata/storage-provisioner/pod.yaml: (1.196505917s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-118298 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9caaba8e-1dc7-44d4-87ea-423261d29f37] Pending
helpers_test.go:352: "sp-pod" [9caaba8e-1dc7-44d4-87ea-423261d29f37] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [9caaba8e-1dc7-44d4-87ea-423261d29f37] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.305039331s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-118298 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (50.48s)
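The retry message above captures the claim spec (ReadWriteOnce, 500Mi, bound by the minikube-hostpath provisioner) and the exec steps show the mount path, so the flow can be replayed by hand. A minimal sketch against this profile; the manifests are reconstructions and the pod image is an arbitrary stand-in, not the actual testdata/storage-provisioner files:

# 1. Create a claim equivalent to the one shown in the log.
kubectl --context functional-118298 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF

# 2. Pod mounting the claim at /tmp/mount (container name and mount path
#    match the log; the image is a placeholder).
cat > sp-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: docker.io/library/nginx:latest
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF
kubectl --context functional-118298 apply -f sp-pod.yaml

# 3. Write through the mount, recreate the pod, and confirm the file survives;
#    this is the same persistence check the test performs.
kubectl --context functional-118298 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-118298 delete -f sp-pod.yaml
kubectl --context functional-118298 apply -f sp-pod.yaml
kubectl --context functional-118298 exec sp-pod -- ls /tmp/mount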

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.34s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh -n functional-118298 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 cp functional-118298:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1266692705/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh -n functional-118298 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh -n functional-118298 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (36.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-118298 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-rsbd2" [e18067e2-6dbe-494b-83a6-8dc2cdb0f9b2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-844cf969f6-rsbd2" [e18067e2-6dbe-494b-83a6-8dc2cdb0f9b2] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 33.005210916s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-118298 exec mysql-844cf969f6-rsbd2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-118298 exec mysql-844cf969f6-rsbd2 -- mysql -ppassword -e "show databases;": exit status 1 (184.303146ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1206 08:49:53.122833    9552 retry.go:31] will retry after 1.084953411s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-118298 exec mysql-844cf969f6-rsbd2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-118298 exec mysql-844cf969f6-rsbd2 -- mysql -ppassword -e "show databases;": exit status 1 (119.779644ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1206 08:49:54.328763    9552 retry.go:31] will retry after 1.712054556s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-118298 exec mysql-844cf969f6-rsbd2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (36.72s)
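The two transient errors above are normal while mysqld initializes: the ERROR 1045 most likely appears while the bootstrap server is still up before the manifest's root password has been applied, and ERROR 2002 appears while the socket is briefly gone as it restarts into the real server. The test simply retries; a hand-rolled equivalent using the app=mysql label and the -ppassword credential visible in the log:

# Retry until the server accepts the credentials from testdata/mysql.yaml.
POD=$(kubectl --context functional-118298 get pod -l app=mysql \
      -o jsonpath='{.items[0].metadata.name}')
until kubectl --context functional-118298 exec "$POD" -- \
      mysql -ppassword -e 'show databases;' >/dev/null 2>&1; do
  echo "mysql not ready yet, retrying..."
  sleep 2
done
kubectl --context functional-118298 exec "$POD" -- mysql -ppassword -e 'show databases;'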

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9552/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "sudo cat /etc/test/nested/copy/9552/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.18s)
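The /etc/test/nested/copy/9552/hosts path checked here follows minikube's file-sync convention: files placed under $MINIKUBE_HOME/files/ on the host are copied into the guest at the same absolute path when the profile starts. A hedged sketch of that convention (the path and contents below are illustrative, mirroring the fixture the test stages):

# Stage a file on the host under the files/ tree.
mkdir -p ~/.minikube/files/etc/test/nested/copy/9552
echo 'Test file for checking file sync process' \
  > ~/.minikube/files/etc/test/nested/copy/9552/hosts
# A (re)start triggers the sync; the file is then visible inside the guest.
minikube -p functional-118298 start
minikube -p functional-118298 ssh 'sudo cat /etc/test/nested/copy/9552/hosts'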

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9552.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "sudo cat /etc/ssl/certs/9552.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9552.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "sudo cat /usr/share/ca-certificates/9552.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/95522.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "sudo cat /etc/ssl/certs/95522.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/95522.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "sudo cat /usr/share/ca-certificates/95522.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.17s)
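The cert-sync checks apply the same idea to CA certificates: under the convention the test relies on (an assumption about how the fixtures are staged, not shown in this excerpt), certs dropped into $MINIKUBE_HOME/certs/ on the host are installed into the guest trust store on start, both under their own name and under an OpenSSL subject-hash alias like the 51391683.0 and 3ec20f2e.0 entries probed above. A hedged sketch:

# my-ca.pem is a placeholder certificate name.
cp my-ca.pem ~/.minikube/certs/
minikube -p functional-118298 start
minikube -p functional-118298 ssh 'sudo cat /etc/ssl/certs/my-ca.pem'
# The <hash>.0 names correspond to the certificate's OpenSSL subject hash:
openssl x509 -noout -hash -in my-ca.pem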

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-118298 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)
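The go-template above prints only the label keys of the first node; the same information is available without a template if you just want to inspect it:

kubectl --context functional-118298 get nodes --show-labels
# or keys and values for the first node as a map:
kubectl --context functional-118298 get nodes -o jsonpath='{.items[0].metadata.labels}'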

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118298 ssh "sudo systemctl is-active docker": exit status 1 (199.924028ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118298 ssh "sudo systemctl is-active containerd": exit status 1 (184.425709ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.38s)
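Both probes exit non-zero because systemctl is-active returns status 3 for an inactive unit, which the ssh wrapper propagates. The complementary check, given that this profile runs with ContainerRuntime=crio (see the profile config lines in the stderr traces elsewhere in this report), would be:

# cri-o should be the only active runtime on this profile.
minikube -p functional-118298 ssh 'sudo systemctl is-active crio'        # expect "active", exit 0
minikube -p functional-118298 ssh 'sudo systemctl is-active docker'      # expect "inactive", exit 3
minikube -p functional-118298 ssh 'sudo systemctl is-active containerd'  # expect "inactive", exit 3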

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.61s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-118298 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-118298
localhost/kicbase/echo-server:functional-118298
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-118298 image ls --format short --alsologtostderr:
I1206 08:49:31.521891   20137 out.go:360] Setting OutFile to fd 1 ...
I1206 08:49:31.522108   20137 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:49:31.522115   20137 out.go:374] Setting ErrFile to fd 2...
I1206 08:49:31.522120   20137 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:49:31.522349   20137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
I1206 08:49:31.522936   20137 config.go:182] Loaded profile config "functional-118298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:49:31.523029   20137 config.go:182] Loaded profile config "functional-118298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:49:31.525068   20137 ssh_runner.go:195] Run: systemctl --version
I1206 08:49:31.527640   20137 main.go:143] libmachine: domain functional-118298 has defined MAC address 52:54:00:73:78:a4 in network mk-functional-118298
I1206 08:49:31.528047   20137 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:78:a4", ip: ""} in network mk-functional-118298: {Iface:virbr1 ExpiryTime:2025-12-06 09:46:24 +0000 UTC Type:0 Mac:52:54:00:73:78:a4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-118298 Clientid:01:52:54:00:73:78:a4}
I1206 08:49:31.528069   20137 main.go:143] libmachine: domain functional-118298 has defined IP address 192.168.39.15 and MAC address 52:54:00:73:78:a4 in network mk-functional-118298
I1206 08:49:31.528223   20137 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/functional-118298/id_rsa Username:docker}
I1206 08:49:31.637192   20137 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.27s)
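As the stderr trace shows, each image ls variant shells into the guest and reads the CRI image store via crictl. You can query the same data directly, or pick the other output formats exercised by the sibling tests below:

# Raw CRI view (same source the minikube command parses):
minikube -p functional-118298 ssh 'sudo crictl images'
# Formatted views from the CLI:
minikube -p functional-118298 image ls --format table
minikube -p functional-118298 image ls --format json
minikube -p functional-118298 image ls --format yaml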

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-118298 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-118298  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ localhost/minikube-local-cache-test     │ functional-118298  │ 7c41bbd51c18a │ 3.33kB │
│ localhost/my-image                      │ functional-118298  │ 924e904a2fedc │ 1.47MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-118298 image ls --format table --alsologtostderr:
I1206 08:49:36.358860   20219 out.go:360] Setting OutFile to fd 1 ...
I1206 08:49:36.359102   20219 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:49:36.359110   20219 out.go:374] Setting ErrFile to fd 2...
I1206 08:49:36.359114   20219 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:49:36.359303   20219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
I1206 08:49:36.359812   20219 config.go:182] Loaded profile config "functional-118298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:49:36.359902   20219 config.go:182] Loaded profile config "functional-118298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:49:36.361857   20219 ssh_runner.go:195] Run: systemctl --version
I1206 08:49:36.363891   20219 main.go:143] libmachine: domain functional-118298 has defined MAC address 52:54:00:73:78:a4 in network mk-functional-118298
I1206 08:49:36.364227   20219 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:78:a4", ip: ""} in network mk-functional-118298: {Iface:virbr1 ExpiryTime:2025-12-06 09:46:24 +0000 UTC Type:0 Mac:52:54:00:73:78:a4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-118298 Clientid:01:52:54:00:73:78:a4}
I1206 08:49:36.364250   20219 main.go:143] libmachine: domain functional-118298 has defined IP address 192.168.39.15 and MAC address 52:54:00:73:78:a4 in network mk-functional-118298
I1206 08:49:36.364409   20219 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/functional-118298/id_rsa Username:docker}
I1206 08:49:36.448460   20219 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-118298 image ls --format json --alsologtostderr:
[{"id":"223e63569793b775294dc8724fec26dc874e010993b5dedb8f72e5db3429f437","repoDigests":["docker.io/library/14a80196e214c97d25945072518dd388caabeb7b4a3511ae9aeb706312ea7a04-tmp@sha256:dd700b016cac5ba0de80a8bc8cd63115e3c4e2c755955a5092f7aa89321f459a"],"repoTags":[],"size":"1466018"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-control
ler-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc7
7f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-118298"],"size":"4943877"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha2
56:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"7c41bbd51c18a87cb56c4183f2debd5ce47ed650396711cbb14ce8557921b273","repoDigests":["localhost/minikube-local-cache-test@sha256:f7d21b852ccfd905f0ed47d2e792cdc0ce4703a16a3d5f69cffb1cf069ce5c18"],"repoTags":["localhost/minikube-local-cache-test:functional-118298"],"size":"3330"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s
.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64d
cc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"924e904a2fedcfcf2f3f4e5f808649ed30bf8659a19a39cbe774005d1a4a8025","repoDigests":["localhost/my-image@sha256:aa48cd6824b87afcd4e79c0ef81f53a67cce621fb2e1c209742f5bcfd22e20d2"],"repoTags":["localhost/my-image:functional-118298"],"size":"1468599"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/
kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id"
:"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-118298 image ls --format json --alsologtostderr:
I1206 08:49:36.301456   20208 out.go:360] Setting OutFile to fd 1 ...
I1206 08:49:36.301692   20208 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:49:36.301700   20208 out.go:374] Setting ErrFile to fd 2...
I1206 08:49:36.301704   20208 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:49:36.301896   20208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
I1206 08:49:36.302361   20208 config.go:182] Loaded profile config "functional-118298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:49:36.302444   20208 config.go:182] Loaded profile config "functional-118298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:49:36.304233   20208 ssh_runner.go:195] Run: systemctl --version
I1206 08:49:36.306308   20208 main.go:143] libmachine: domain functional-118298 has defined MAC address 52:54:00:73:78:a4 in network mk-functional-118298
I1206 08:49:36.306775   20208 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:78:a4", ip: ""} in network mk-functional-118298: {Iface:virbr1 ExpiryTime:2025-12-06 09:46:24 +0000 UTC Type:0 Mac:52:54:00:73:78:a4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-118298 Clientid:01:52:54:00:73:78:a4}
I1206 08:49:36.306812   20208 main.go:143] libmachine: domain functional-118298 has defined IP address 192.168.39.15 and MAC address 52:54:00:73:78:a4 in network mk-functional-118298
I1206 08:49:36.307027   20208 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/functional-118298/id_rsa Username:docker}
I1206 08:49:36.391172   20208 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-118298 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 7c41bbd51c18a87cb56c4183f2debd5ce47ed650396711cbb14ce8557921b273
repoDigests:
- localhost/minikube-local-cache-test@sha256:f7d21b852ccfd905f0ed47d2e792cdc0ce4703a16a3d5f69cffb1cf069ce5c18
repoTags:
- localhost/minikube-local-cache-test:functional-118298
size: "3330"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-118298
size: "4943877"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-118298 image ls --format yaml --alsologtostderr:
I1206 08:49:31.793034   20147 out.go:360] Setting OutFile to fd 1 ...
I1206 08:49:31.793256   20147 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:49:31.793264   20147 out.go:374] Setting ErrFile to fd 2...
I1206 08:49:31.793268   20147 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:49:31.793426   20147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
I1206 08:49:31.793912   20147 config.go:182] Loaded profile config "functional-118298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:49:31.794007   20147 config.go:182] Loaded profile config "functional-118298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:49:31.796020   20147 ssh_runner.go:195] Run: systemctl --version
I1206 08:49:31.798434   20147 main.go:143] libmachine: domain functional-118298 has defined MAC address 52:54:00:73:78:a4 in network mk-functional-118298
I1206 08:49:31.798892   20147 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:78:a4", ip: ""} in network mk-functional-118298: {Iface:virbr1 ExpiryTime:2025-12-06 09:46:24 +0000 UTC Type:0 Mac:52:54:00:73:78:a4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-118298 Clientid:01:52:54:00:73:78:a4}
I1206 08:49:31.798919   20147 main.go:143] libmachine: domain functional-118298 has defined IP address 192.168.39.15 and MAC address 52:54:00:73:78:a4 in network mk-functional-118298
I1206 08:49:31.799050   20147 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/functional-118298/id_rsa Username:docker}
I1206 08:49:31.885480   20147 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118298 ssh pgrep buildkitd: exit status 1 (155.735896ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image build -t localhost/my-image:functional-118298 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-118298 image build -t localhost/my-image:functional-118298 testdata/build --alsologtostderr: (3.943297708s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-118298 image build -t localhost/my-image:functional-118298 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 223e6356979
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-118298
--> 924e904a2fe
Successfully tagged localhost/my-image:functional-118298
924e904a2fedcfcf2f3f4e5f808649ed30bf8659a19a39cbe774005d1a4a8025
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-118298 image build -t localhost/my-image:functional-118298 testdata/build --alsologtostderr:
I1206 08:49:32.140744   20169 out.go:360] Setting OutFile to fd 1 ...
I1206 08:49:32.140996   20169 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:49:32.141006   20169 out.go:374] Setting ErrFile to fd 2...
I1206 08:49:32.141010   20169 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 08:49:32.141198   20169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
I1206 08:49:32.141724   20169 config.go:182] Loaded profile config "functional-118298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:49:32.142347   20169 config.go:182] Loaded profile config "functional-118298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 08:49:32.144452   20169 ssh_runner.go:195] Run: systemctl --version
I1206 08:49:32.146893   20169 main.go:143] libmachine: domain functional-118298 has defined MAC address 52:54:00:73:78:a4 in network mk-functional-118298
I1206 08:49:32.147374   20169 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:73:78:a4", ip: ""} in network mk-functional-118298: {Iface:virbr1 ExpiryTime:2025-12-06 09:46:24 +0000 UTC Type:0 Mac:52:54:00:73:78:a4 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:functional-118298 Clientid:01:52:54:00:73:78:a4}
I1206 08:49:32.147413   20169 main.go:143] libmachine: domain functional-118298 has defined IP address 192.168.39.15 and MAC address 52:54:00:73:78:a4 in network mk-functional-118298
I1206 08:49:32.147559   20169 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/functional-118298/id_rsa Username:docker}
I1206 08:49:32.232659   20169 build_images.go:162] Building image from path: /tmp/build.1608184855.tar
I1206 08:49:32.232726   20169 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 08:49:32.245699   20169 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1608184855.tar
I1206 08:49:32.255021   20169 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1608184855.tar: stat -c "%s %y" /var/lib/minikube/build/build.1608184855.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1608184855.tar': No such file or directory
I1206 08:49:32.255048   20169 ssh_runner.go:362] scp /tmp/build.1608184855.tar --> /var/lib/minikube/build/build.1608184855.tar (3072 bytes)
I1206 08:49:32.310163   20169 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1608184855
I1206 08:49:32.326213   20169 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1608184855 -xf /var/lib/minikube/build/build.1608184855.tar
I1206 08:49:32.338367   20169 crio.go:315] Building image: /var/lib/minikube/build/build.1608184855
I1206 08:49:32.338423   20169 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-118298 /var/lib/minikube/build/build.1608184855 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1206 08:49:35.998004   20169 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-118298 /var/lib/minikube/build/build.1608184855 --cgroup-manager=cgroupfs: (3.659546061s)
I1206 08:49:35.998074   20169 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1608184855
I1206 08:49:36.013086   20169 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1608184855.tar
I1206 08:49:36.025563   20169 build_images.go:218] Built localhost/my-image:functional-118298 from /tmp/build.1608184855.tar
I1206 08:49:36.025600   20169 build_images.go:134] succeeded building to: functional-118298
I1206 08:49:36.025608   20169 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image ls
2025/12/06 08:49:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.32s)
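The three STEP lines imply a build context along these lines (reconstructed from the output, not the actual testdata/build directory; the content.txt payload is a guess). As the trace shows, minikube tars the context, copies it to /var/lib/minikube/build inside the guest, and runs podman there with --cgroup-manager=cgroupfs:

mkdir -p build && cd build
printf 'some content\n' > content.txt           # payload is illustrative
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
minikube -p functional-118298 image build -t localhost/my-image:functional-118298 .
minikube -p functional-118298 image ls | grep my-image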

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.92s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-118298
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.92s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image load --daemon kicbase/echo-server:functional-118298 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-118298 image load --daemon kicbase/echo-server:functional-118298 --alsologtostderr: (1.321631482s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.50s)
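Setup plus ImageLoadDaemon amount to the following round trip: the image is pulled and retagged in the host Docker daemon, then image load --daemon streams it from that daemon into the profile's CRI-O image store, where image ls can see it.

docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-118298
# --daemon takes the image from the local Docker daemon rather than a tarball.
minikube -p functional-118298 image load --daemon kicbase/echo-server:functional-118298
minikube -p functional-118298 image ls | grep echo-server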

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (10.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-118298 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-118298 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-q2nds" [a0fd8d25-33f2-43dc-b63c-7528a48cb07d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-5758569b79-q2nds" [a0fd8d25-33f2-43dc-b63c-7528a48cb07d] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.006578831s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (10.17s)
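The deployment comes straight from the two kubectl commands above; to reproduce the readiness wait outside the test harness, something like the following would do (the kubectl wait selector and timeout are illustrative, not part of the test code):
    kubectl --context functional-118298 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-118298 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-118298 wait --for=condition=ready pod -l app=hello-node --timeout=600s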

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.83s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image load --daemon kicbase/echo-server:functional-118298 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.83s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.76s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-118298
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image load --daemon kicbase/echo-server:functional-118298 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.76s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.52s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image save kicbase/echo-server:functional-118298 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.48s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image rm kicbase/echo-server:functional-118298 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.67s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.67s)
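Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile amount to a tar round-trip; a hand-run equivalent, with an arbitrary /tmp path standing in for the Jenkins workspace path used above:
    out/minikube-linux-amd64 -p functional-118298 image save kicbase/echo-server:functional-118298 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-118298 image rm kicbase/echo-server:functional-118298
    out/minikube-linux-amd64 -p functional-118298 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-118298 image ls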

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.53s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-118298
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 image save --daemon kicbase/echo-server:functional-118298 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-118298
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.33s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.33s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "268.494167ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.836502ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.31s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "249.150498ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "56.425263ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.31s)
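The timing gap above (roughly 250ms vs 56ms) is the point of the --light variant, which appears to skip the per-profile status probing; both forms are plain CLI calls taken from this run:
    out/minikube-linux-amd64 profile list -o json
    out/minikube-linux-amd64 profile list -o json --light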

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (14.84s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-118298 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1192958726/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765010953017925616" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1192958726/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765010953017925616" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1192958726/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765010953017925616" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1192958726/001/test-1765010953017925616
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118298 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (159.157946ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 08:49:13.177460    9552 retry.go:31] will retry after 284.345308ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 08:49 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 08:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 08:49 test-1765010953017925616
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh cat /mount-9p/test-1765010953017925616
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-118298 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [927c5f70-8594-4ebc-83b4-aa88a26adfdb] Pending
helpers_test.go:352: "busybox-mount" [927c5f70-8594-4ebc-83b4-aa88a26adfdb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [927c5f70-8594-4ebc-83b4-aa88a26adfdb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [927c5f70-8594-4ebc-83b4-aa88a26adfdb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.003912369s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-118298 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-118298 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1192958726/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (14.84s)
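The 9p mount checked here can be driven manually with the same commands the test issues; the host directory below is a placeholder, and the mount is simply backgrounded where the test runs it as a daemon:
    out/minikube-linux-amd64 mount -p functional-118298 /tmp/host-dir:/mount-9p &
    out/minikube-linux-amd64 -p functional-118298 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-118298 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-118298 ssh "sudo umount -f /mount-9p"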

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.33s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 service list -o json
functional_test.go:1504: Took "269.16304ms" to run "out/minikube-linux-amd64 -p functional-118298 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.33s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.15:32285
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.33s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.53s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.15:32285
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.53s)
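Both endpoints found above (https://192.168.39.15:32285 and http://192.168.39.15:32285) point at the same NodePort service; the two lookups differ only in the --https flag:
    out/minikube-linux-amd64 -p functional-118298 service hello-node --url
    out/minikube-linux-amd64 -p functional-118298 service --namespace=default --https --url hello-node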

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.56s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-118298 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo696855144/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118298 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (204.982264ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 08:49:28.059556    9552 retry.go:31] will retry after 560.957343ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "findmnt -T /mount-9p | grep 9p"
I1206 08:49:28.712417    9552 detect.go:223] nested VM detected
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-118298 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo696855144/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118298 ssh "sudo umount -f /mount-9p": exit status 1 (177.280796ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-118298 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-118298 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo696855144/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.56s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.36s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-118298 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2984392988/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-118298 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2984392988/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-118298 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2984392988/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-118298 ssh "findmnt -T" /mount1: exit status 1 (203.292197ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 08:49:29.623403    9552 retry.go:31] will retry after 532.977069ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-118298 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-118298 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-118298 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2984392988/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-118298 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2984392988/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-118298 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2984392988/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.36s)
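The cleanup path being verified is the --kill flag, which tears down every mount daemon for the profile at once; a hand-run sketch with a placeholder host directory:
    out/minikube-linux-amd64 mount -p functional-118298 /tmp/host-dir:/mount1 &
    out/minikube-linux-amd64 -p functional-118298 ssh "findmnt -T" /mount1
    out/minikube-linux-amd64 mount -p functional-118298 --kill=true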

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-118298
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.01s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-118298
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-118298
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (202.35s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1206 08:49:57.227044    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:49:57.868429    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:49:59.150007    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:50:01.711771    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:50:06.833725    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:50:17.075946    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:50:37.557601    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:51:18.520661    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:51:25.408957    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:52:40.442995    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:52:48.475901    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-512692 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m21.776002553s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (202.35s)
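The cluster under test is brought up with the --ha flag, which is what produces the additional control-plane nodes seen in the later status output; stripped of the test-only logging flags, the start amounts to:
    out/minikube-linux-amd64 -p ha-512692 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-512692 status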

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.88s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-512692 kubectl -- rollout status deployment/busybox: (5.583059302s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- exec busybox-7b57f96db7-2268w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- exec busybox-7b57f96db7-d4wl5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- exec busybox-7b57f96db7-hmp7v -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- exec busybox-7b57f96db7-2268w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- exec busybox-7b57f96db7-d4wl5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- exec busybox-7b57f96db7-hmp7v -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- exec busybox-7b57f96db7-2268w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- exec busybox-7b57f96db7-d4wl5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- exec busybox-7b57f96db7-hmp7v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.88s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.29s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- exec busybox-7b57f96db7-2268w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- exec busybox-7b57f96db7-2268w -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- exec busybox-7b57f96db7-d4wl5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- exec busybox-7b57f96db7-d4wl5 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- exec busybox-7b57f96db7-hmp7v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 kubectl -- exec busybox-7b57f96db7-hmp7v -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)
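Each busybox pod resolves host.minikube.internal and pings the host gateway (192.168.39.1 on this KVM network); with a real pod name substituted for the placeholder, the same check is:
    out/minikube-linux-amd64 -p ha-512692 kubectl -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-amd64 -p ha-512692 kubectl -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.39.1"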

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (44.64s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 node add --alsologtostderr -v 5
E1206 08:54:04.656225    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:54:04.662599    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:54:04.674032    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:54:04.695605    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:54:04.737052    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:54:04.818547    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:54:04.980106    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:54:05.301797    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:54:05.944008    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:54:07.225700    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:54:09.787905    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-512692 node add --alsologtostderr -v 5: (43.94630719s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.64s)
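node add without a --control-plane flag joins the new machine as a worker, which is consistent with ha-512692-m04 later reporting only host and kubelet status; minus the logging flags the step is:
    out/minikube-linux-amd64 -p ha-512692 node add
    out/minikube-linux-amd64 -p ha-512692 status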

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-512692 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.54s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp testdata/cp-test.txt ha-512692:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692 "sudo cat /home/docker/cp-test.txt"
E1206 08:54:14.909321    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp ha-512692:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1825477375/001/cp-test_ha-512692.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp ha-512692:/home/docker/cp-test.txt ha-512692-m02:/home/docker/cp-test_ha-512692_ha-512692-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m02 "sudo cat /home/docker/cp-test_ha-512692_ha-512692-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp ha-512692:/home/docker/cp-test.txt ha-512692-m03:/home/docker/cp-test_ha-512692_ha-512692-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m03 "sudo cat /home/docker/cp-test_ha-512692_ha-512692-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp ha-512692:/home/docker/cp-test.txt ha-512692-m04:/home/docker/cp-test_ha-512692_ha-512692-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m04 "sudo cat /home/docker/cp-test_ha-512692_ha-512692-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp testdata/cp-test.txt ha-512692-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp ha-512692-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1825477375/001/cp-test_ha-512692-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp ha-512692-m02:/home/docker/cp-test.txt ha-512692:/home/docker/cp-test_ha-512692-m02_ha-512692.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692 "sudo cat /home/docker/cp-test_ha-512692-m02_ha-512692.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp ha-512692-m02:/home/docker/cp-test.txt ha-512692-m03:/home/docker/cp-test_ha-512692-m02_ha-512692-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m03 "sudo cat /home/docker/cp-test_ha-512692-m02_ha-512692-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp ha-512692-m02:/home/docker/cp-test.txt ha-512692-m04:/home/docker/cp-test_ha-512692-m02_ha-512692-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m04 "sudo cat /home/docker/cp-test_ha-512692-m02_ha-512692-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp testdata/cp-test.txt ha-512692-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp ha-512692-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1825477375/001/cp-test_ha-512692-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp ha-512692-m03:/home/docker/cp-test.txt ha-512692:/home/docker/cp-test_ha-512692-m03_ha-512692.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692 "sudo cat /home/docker/cp-test_ha-512692-m03_ha-512692.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp ha-512692-m03:/home/docker/cp-test.txt ha-512692-m02:/home/docker/cp-test_ha-512692-m03_ha-512692-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m02 "sudo cat /home/docker/cp-test_ha-512692-m03_ha-512692-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp ha-512692-m03:/home/docker/cp-test.txt ha-512692-m04:/home/docker/cp-test_ha-512692-m03_ha-512692-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m04 "sudo cat /home/docker/cp-test_ha-512692-m03_ha-512692-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp testdata/cp-test.txt ha-512692-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp ha-512692-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1825477375/001/cp-test_ha-512692-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp ha-512692-m04:/home/docker/cp-test.txt ha-512692:/home/docker/cp-test_ha-512692-m04_ha-512692.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692 "sudo cat /home/docker/cp-test_ha-512692-m04_ha-512692.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp ha-512692-m04:/home/docker/cp-test.txt ha-512692-m02:/home/docker/cp-test_ha-512692-m04_ha-512692-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m02 "sudo cat /home/docker/cp-test_ha-512692-m04_ha-512692-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 cp ha-512692-m04:/home/docker/cp-test.txt ha-512692-m03:/home/docker/cp-test_ha-512692-m04_ha-512692-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m03 "sudo cat /home/docker/cp-test_ha-512692-m04_ha-512692-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.54s)
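The matrix above is built from two primitives, minikube cp and ssh -n, applied to every node pair; one leg of it, runnable as-is against this profile:
    out/minikube-linux-amd64 -p ha-512692 cp testdata/cp-test.txt ha-512692-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-512692 ssh -n ha-512692-m02 "sudo cat /home/docker/cp-test.txt"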

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (86s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 node stop m02 --alsologtostderr -v 5
E1206 08:54:25.150970    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:54:45.633111    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:54:56.581587    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:55:24.284651    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:55:26.594971    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-512692 node stop m02 --alsologtostderr -v 5: (1m25.48408981s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-512692 status --alsologtostderr -v 5: exit status 7 (512.470555ms)

                                                
                                                
-- stdout --
	ha-512692
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-512692-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-512692-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-512692-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 08:55:50.127078   23313 out.go:360] Setting OutFile to fd 1 ...
	I1206 08:55:50.127422   23313 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:55:50.127436   23313 out.go:374] Setting ErrFile to fd 2...
	I1206 08:55:50.127443   23313 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 08:55:50.127796   23313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 08:55:50.128084   23313 out.go:368] Setting JSON to false
	I1206 08:55:50.128117   23313 mustload.go:66] Loading cluster: ha-512692
	I1206 08:55:50.128272   23313 notify.go:221] Checking for updates...
	I1206 08:55:50.128662   23313 config.go:182] Loaded profile config "ha-512692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 08:55:50.128687   23313 status.go:174] checking status of ha-512692 ...
	I1206 08:55:50.131123   23313 status.go:371] ha-512692 host status = "Running" (err=<nil>)
	I1206 08:55:50.131140   23313 host.go:66] Checking if "ha-512692" exists ...
	I1206 08:55:50.133878   23313 main.go:143] libmachine: domain ha-512692 has defined MAC address 52:54:00:48:41:e4 in network mk-ha-512692
	I1206 08:55:50.134420   23313 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:41:e4", ip: ""} in network mk-ha-512692: {Iface:virbr1 ExpiryTime:2025-12-06 09:50:13 +0000 UTC Type:0 Mac:52:54:00:48:41:e4 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-512692 Clientid:01:52:54:00:48:41:e4}
	I1206 08:55:50.134461   23313 main.go:143] libmachine: domain ha-512692 has defined IP address 192.168.39.207 and MAC address 52:54:00:48:41:e4 in network mk-ha-512692
	I1206 08:55:50.134637   23313 host.go:66] Checking if "ha-512692" exists ...
	I1206 08:55:50.134848   23313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:55:50.137268   23313 main.go:143] libmachine: domain ha-512692 has defined MAC address 52:54:00:48:41:e4 in network mk-ha-512692
	I1206 08:55:50.137693   23313 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:41:e4", ip: ""} in network mk-ha-512692: {Iface:virbr1 ExpiryTime:2025-12-06 09:50:13 +0000 UTC Type:0 Mac:52:54:00:48:41:e4 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-512692 Clientid:01:52:54:00:48:41:e4}
	I1206 08:55:50.137728   23313 main.go:143] libmachine: domain ha-512692 has defined IP address 192.168.39.207 and MAC address 52:54:00:48:41:e4 in network mk-ha-512692
	I1206 08:55:50.137916   23313 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/ha-512692/id_rsa Username:docker}
	I1206 08:55:50.231040   23313 ssh_runner.go:195] Run: systemctl --version
	I1206 08:55:50.239387   23313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 08:55:50.259798   23313 kubeconfig.go:125] found "ha-512692" server: "https://192.168.39.254:8443"
	I1206 08:55:50.259835   23313 api_server.go:166] Checking apiserver status ...
	I1206 08:55:50.259886   23313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:50.279979   23313 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup
	W1206 08:55:50.292073   23313 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:55:50.292130   23313 ssh_runner.go:195] Run: ls
	I1206 08:55:50.297866   23313 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1206 08:55:50.302650   23313 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1206 08:55:50.302674   23313 status.go:463] ha-512692 apiserver status = Running (err=<nil>)
	I1206 08:55:50.302685   23313 status.go:176] ha-512692 status: &{Name:ha-512692 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 08:55:50.302723   23313 status.go:174] checking status of ha-512692-m02 ...
	I1206 08:55:50.304391   23313 status.go:371] ha-512692-m02 host status = "Stopped" (err=<nil>)
	I1206 08:55:50.304404   23313 status.go:384] host is not running, skipping remaining checks
	I1206 08:55:50.304409   23313 status.go:176] ha-512692-m02 status: &{Name:ha-512692-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 08:55:50.304419   23313 status.go:174] checking status of ha-512692-m03 ...
	I1206 08:55:50.305585   23313 status.go:371] ha-512692-m03 host status = "Running" (err=<nil>)
	I1206 08:55:50.305602   23313 host.go:66] Checking if "ha-512692-m03" exists ...
	I1206 08:55:50.308134   23313 main.go:143] libmachine: domain ha-512692-m03 has defined MAC address 52:54:00:03:b0:4e in network mk-ha-512692
	I1206 08:55:50.308541   23313 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:b0:4e", ip: ""} in network mk-ha-512692: {Iface:virbr1 ExpiryTime:2025-12-06 09:52:14 +0000 UTC Type:0 Mac:52:54:00:03:b0:4e Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-512692-m03 Clientid:01:52:54:00:03:b0:4e}
	I1206 08:55:50.308566   23313 main.go:143] libmachine: domain ha-512692-m03 has defined IP address 192.168.39.69 and MAC address 52:54:00:03:b0:4e in network mk-ha-512692
	I1206 08:55:50.308696   23313 host.go:66] Checking if "ha-512692-m03" exists ...
	I1206 08:55:50.308876   23313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:55:50.310646   23313 main.go:143] libmachine: domain ha-512692-m03 has defined MAC address 52:54:00:03:b0:4e in network mk-ha-512692
	I1206 08:55:50.311006   23313 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:b0:4e", ip: ""} in network mk-ha-512692: {Iface:virbr1 ExpiryTime:2025-12-06 09:52:14 +0000 UTC Type:0 Mac:52:54:00:03:b0:4e Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:ha-512692-m03 Clientid:01:52:54:00:03:b0:4e}
	I1206 08:55:50.311025   23313 main.go:143] libmachine: domain ha-512692-m03 has defined IP address 192.168.39.69 and MAC address 52:54:00:03:b0:4e in network mk-ha-512692
	I1206 08:55:50.311140   23313 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/ha-512692-m03/id_rsa Username:docker}
	I1206 08:55:50.395055   23313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 08:55:50.414234   23313 kubeconfig.go:125] found "ha-512692" server: "https://192.168.39.254:8443"
	I1206 08:55:50.414257   23313 api_server.go:166] Checking apiserver status ...
	I1206 08:55:50.414294   23313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 08:55:50.437004   23313 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1801/cgroup
	W1206 08:55:50.450780   23313 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1801/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 08:55:50.450825   23313 ssh_runner.go:195] Run: ls
	I1206 08:55:50.456902   23313 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1206 08:55:50.462240   23313 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1206 08:55:50.462269   23313 status.go:463] ha-512692-m03 apiserver status = Running (err=<nil>)
	I1206 08:55:50.462281   23313 status.go:176] ha-512692-m03 status: &{Name:ha-512692-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 08:55:50.462315   23313 status.go:174] checking status of ha-512692-m04 ...
	I1206 08:55:50.464236   23313 status.go:371] ha-512692-m04 host status = "Running" (err=<nil>)
	I1206 08:55:50.464257   23313 host.go:66] Checking if "ha-512692-m04" exists ...
	I1206 08:55:50.467152   23313 main.go:143] libmachine: domain ha-512692-m04 has defined MAC address 52:54:00:21:31:c4 in network mk-ha-512692
	I1206 08:55:50.467551   23313 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:31:c4", ip: ""} in network mk-ha-512692: {Iface:virbr1 ExpiryTime:2025-12-06 09:53:45 +0000 UTC Type:0 Mac:52:54:00:21:31:c4 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-512692-m04 Clientid:01:52:54:00:21:31:c4}
	I1206 08:55:50.467578   23313 main.go:143] libmachine: domain ha-512692-m04 has defined IP address 192.168.39.108 and MAC address 52:54:00:21:31:c4 in network mk-ha-512692
	I1206 08:55:50.467721   23313 host.go:66] Checking if "ha-512692-m04" exists ...
	I1206 08:55:50.467964   23313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 08:55:50.469981   23313 main.go:143] libmachine: domain ha-512692-m04 has defined MAC address 52:54:00:21:31:c4 in network mk-ha-512692
	I1206 08:55:50.470360   23313 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:31:c4", ip: ""} in network mk-ha-512692: {Iface:virbr1 ExpiryTime:2025-12-06 09:53:45 +0000 UTC Type:0 Mac:52:54:00:21:31:c4 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-512692-m04 Clientid:01:52:54:00:21:31:c4}
	I1206 08:55:50.470383   23313 main.go:143] libmachine: domain ha-512692-m04 has defined IP address 192.168.39.108 and MAC address 52:54:00:21:31:c4 in network mk-ha-512692
	I1206 08:55:50.470544   23313 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/ha-512692-m04/id_rsa Username:docker}
	I1206 08:55:50.556919   23313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 08:55:50.576715   23313 status.go:176] ha-512692-m04 status: &{Name:ha-512692-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (86.00s)
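The status check logged above ends by probing the control-plane VIP's /healthz endpoint and treating a 200 response with body "ok" as a running apiserver. A minimal standalone sketch of that probe follows; it is not minikube's implementation, and it omits the client-certificate setup the real check performs (TLS verification is disabled here purely for illustration).

	// healthz_probe.go - hedged sketch of the apiserver health probe seen in the log.
	// The default URL is the VIP from this run; real minikube status also wires up
	// client certificates, which are omitted here.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		url := "https://192.168.39.254:8443/healthz" // VIP from the log; adjust for your cluster
		if len(os.Args) > 1 {
			url = os.Args[1]
		}

		client := &http.Client{
			Timeout: 5 * time.Second,
			// Verification is skipped only because this sketch carries no CA or client certs.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		resp, err := client.Get(url)
		if err != nil {
			fmt.Fprintln(os.Stderr, "healthz probe failed:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
		if resp.StatusCode != http.StatusOK {
			os.Exit(1)
		}
	}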

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (37.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 node start m02 --alsologtostderr -v 5
E1206 08:56:25.409675    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-512692 node start m02 --alsologtostderr -v 5: (36.564170712s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (374s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 stop --alsologtostderr -v 5
E1206 08:56:48.516572    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:59:04.663564    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:59:32.358920    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 08:59:56.581848    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-512692 stop --alsologtostderr -v 5: (4m9.735418556s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 start --wait true --alsologtostderr -v 5
E1206 09:01:25.409232    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-512692 start --wait true --alsologtostderr -v 5: (2m4.099044357s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (374.00s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-512692 node delete m03 --alsologtostderr -v 5: (18.04986225s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.76s)
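The last assertion in this block runs kubectl with a go-template that prints each node's Ready condition status. A hedged sketch of the same readiness check, shelled out from Go, follows; it assumes kubectl is on PATH and that the active kubeconfig context points at the cluster under test.

	// ready_check.go - sketch of the node-readiness assertion shown in the log.
	// Assumes `kubectl` is on PATH and the current context is the target cluster.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kubectl failed:", err)
			os.Exit(1)
		}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if strings.TrimSpace(line) != "True" {
				fmt.Fprintln(os.Stderr, "found a node whose Ready condition is:", line)
				os.Exit(1)
			}
		}
		fmt.Println("all nodes report Ready=True")
	}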

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (258.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 stop --alsologtostderr -v 5
E1206 09:04:04.658171    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:04:56.582445    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:06:19.646192    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:06:25.409262    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-512692 stop --alsologtostderr -v 5: (4m18.216572188s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-512692 status --alsologtostderr -v 5: exit status 7 (60.179477ms)

                                                
                                                
-- stdout --
	ha-512692
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-512692-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-512692-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:07:21.007111   26529 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:07:21.007352   26529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:07:21.007361   26529 out.go:374] Setting ErrFile to fd 2...
	I1206 09:07:21.007364   26529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:07:21.007581   26529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 09:07:21.007737   26529 out.go:368] Setting JSON to false
	I1206 09:07:21.007763   26529 mustload.go:66] Loading cluster: ha-512692
	I1206 09:07:21.007895   26529 notify.go:221] Checking for updates...
	I1206 09:07:21.008122   26529 config.go:182] Loaded profile config "ha-512692": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:07:21.008136   26529 status.go:174] checking status of ha-512692 ...
	I1206 09:07:21.010153   26529 status.go:371] ha-512692 host status = "Stopped" (err=<nil>)
	I1206 09:07:21.010171   26529 status.go:384] host is not running, skipping remaining checks
	I1206 09:07:21.010178   26529 status.go:176] ha-512692 status: &{Name:ha-512692 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:07:21.010201   26529 status.go:174] checking status of ha-512692-m02 ...
	I1206 09:07:21.011533   26529 status.go:371] ha-512692-m02 host status = "Stopped" (err=<nil>)
	I1206 09:07:21.011548   26529 status.go:384] host is not running, skipping remaining checks
	I1206 09:07:21.011554   26529 status.go:176] ha-512692-m02 status: &{Name:ha-512692-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:07:21.011571   26529 status.go:174] checking status of ha-512692-m04 ...
	I1206 09:07:21.012837   26529 status.go:371] ha-512692-m04 host status = "Stopped" (err=<nil>)
	I1206 09:07:21.012853   26529 status.go:384] host is not running, skipping remaining checks
	I1206 09:07:21.012859   26529 status.go:176] ha-512692-m04 status: &{Name:ha-512692-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (258.28s)
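After a full stop, minikube status still writes the per-node report to stdout but exits non-zero (exit status 7 in this run) because every host is reported as Stopped. A caller that shells out to it therefore has to read stdout even on failure; a hedged sketch of that pattern follows, assuming only that a non-zero exit signals at least one non-running component rather than relying on the exact code.

	// status_exit.go - sketch of handling `minikube status` for a stopped cluster.
	// The only assumption is that a non-zero exit means some component is not running;
	// the specific code (7 in the log above) is treated as informational.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "-p", "ha-512692", "status")
		out, err := cmd.Output() // stdout still carries the per-node report on failure

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("cluster healthy:")
		case errors.As(err, &exitErr):
			fmt.Printf("status exited with code %d (some components not running):\n", exitErr.ExitCode())
		default:
			fmt.Println("could not run minikube status:", err)
			return
		}
		fmt.Print(string(out))
	}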

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (95.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-512692 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m34.665408723s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (95.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (84.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 node add --control-plane --alsologtostderr -v 5
E1206 09:09:04.656766    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:09:28.478370    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:09:56.581680    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-512692 node add --control-plane --alsologtostderr -v 5: (1m24.285963156s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-512692 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.98s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.72s)

                                                
                                    
x
+
TestJSONOutput/start/Command (87.64s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-561896 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1206 09:10:27.720954    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:11:25.409575    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-561896 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m27.638848941s)
--- PASS: TestJSONOutput/start/Command (87.64s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-561896 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-561896 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.95s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-561896 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-561896 --output=json --user=testUser: (6.948998847s)
--- PASS: TestJSONOutput/stop/Command (6.95s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-890238 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-890238 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (75.990839ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0ddb5496-e96d-403c-ba4f-e5bc0d940731","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-890238] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7dcec9db-4dcf-453d-8e66-f57b0a57dc27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22049"}}
	{"specversion":"1.0","id":"017ed46f-91cc-4846-afbe-95d63b95a564","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fa914a55-4df4-4156-bf0e-56d9d8a8169e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig"}}
	{"specversion":"1.0","id":"0b2ba35f-1447-4ee1-9191-a5ccefca0ee7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube"}}
	{"specversion":"1.0","id":"b5ff069b-2685-4137-a1ba-d490e2046e26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"770319b6-de8a-41da-8b62-23f9d0fb2eba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9bf15c21-778c-4916-99ed-9c76eeafb75d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-890238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-890238
--- PASS: TestErrorJSONOutput (0.23s)
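With --output=json, each line minikube emits is a self-contained CloudEvents-style JSON object like the ones captured above (specversion, id, source, type, and a data payload). A hedged sketch of consuming that stream line by line follows; the struct fields mirror the keys visible in this log rather than any published schema, and other event types may carry additional keys.

	// json_events.go - sketch of decoding minikube's --output=json event stream.
	// Field names are taken from the events visible in the log above.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json ... | ./json_events
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // ignore any non-JSON lines
			}
			fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
		}
	}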

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (81.17s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-256471 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-256471 --driver=kvm2  --container-runtime=crio: (37.485889576s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-259056 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-259056 --driver=kvm2  --container-runtime=crio: (41.129984195s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-256471
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-259056
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-259056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-259056
helpers_test.go:175: Cleaning up "first-256471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-256471
--- PASS: TestMinikubeProfile (81.17s)
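TestMinikubeProfile switches between the two profiles and then reads the output of profile list -ojson. A hedged sketch of inspecting that JSON without committing to a schema follows: it decodes into a generic map and only reports the top-level keys and how many entries each holds, since the exact layout of the profile list can change between minikube versions.

	// profile_list.go - sketch of reading `minikube profile list -o json`.
	// Decodes into a generic map because the exact schema is version-dependent.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "profile list failed:", err)
			os.Exit(1)
		}
		var doc map[string]json.RawMessage
		if err := json.Unmarshal(out, &doc); err != nil {
			fmt.Fprintln(os.Stderr, "unexpected output:", err)
			os.Exit(1)
		}
		for key, raw := range doc {
			var entries []json.RawMessage
			if err := json.Unmarshal(raw, &entries); err == nil {
				fmt.Printf("%s: %d profile(s)\n", key, len(entries))
			} else {
				fmt.Printf("%s: %s\n", key, raw)
			}
		}
	}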

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (20s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-048249 --memory=3072 --mount-string /tmp/TestMountStartserial3591664406/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-048249 --memory=3072 --mount-string /tmp/TestMountStartserial3591664406/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.995578509s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.00s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-048249 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-048249 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)
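VerifyMountFirst checks the 9p mount by listing the directory and then asking findmnt for a JSON description of /minikube-host. A hedged sketch of the same verification, driven through minikube ssh from the host, follows; the findmnt JSON layout (a top-level "filesystems" array with target/source/fstype fields) matches current util-linux output and may differ on other images.

	// verify_mount.go - sketch of the mount verification seen in VerifyMountFirst.
	// Assumes `minikube` is on PATH; the findmnt JSON field names follow current
	// util-linux output and are an assumption of this sketch.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	type findmntOutput struct {
		Filesystems []struct {
			Target  string `json:"target"`
			Source  string `json:"source"`
			Fstype  string `json:"fstype"`
			Options string `json:"options"`
		} `json:"filesystems"`
	}

	func main() {
		profile, target := "mount-start-1-048249", "/minikube-host"
		out, err := exec.Command("minikube", "-p", profile, "ssh", "--",
			"findmnt", "--json", target).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "findmnt via minikube ssh failed:", err)
			os.Exit(1)
		}
		var fm findmntOutput
		if err := json.Unmarshal(out, &fm); err != nil || len(fm.Filesystems) == 0 {
			fmt.Fprintln(os.Stderr, "could not parse findmnt output:", err)
			os.Exit(1)
		}
		fs := fm.Filesystems[0]
		fmt.Printf("%s is mounted from %s as %s (%s)\n", fs.Target, fs.Source, fs.Fstype, fs.Options)
	}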

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (22.33s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-064412 --memory=3072 --mount-string /tmp/TestMountStartserial3591664406/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1206 09:14:04.657379    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-064412 --memory=3072 --mount-string /tmp/TestMountStartserial3591664406/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (21.328139744s)
--- PASS: TestMountStart/serial/StartWithMountSecond (22.33s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-064412 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-064412 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-048249 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-064412 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-064412 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-064412
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-064412: (1.250606904s)
--- PASS: TestMountStart/serial/Stop (1.25s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (21.22s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-064412
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-064412: (20.216122044s)
--- PASS: TestMountStart/serial/RestartStopped (21.22s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-064412 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-064412 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (105.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-240535 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1206 09:14:56.582144    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-240535 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m44.933560892s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.28s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-240535 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-240535 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-240535 -- rollout status deployment/busybox: (4.943397707s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-240535 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-240535 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-240535 -- exec busybox-7b57f96db7-4k9s7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-240535 -- exec busybox-7b57f96db7-4qpgv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-240535 -- exec busybox-7b57f96db7-4k9s7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-240535 -- exec busybox-7b57f96db7-4qpgv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-240535 -- exec busybox-7b57f96db7-4k9s7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-240535 -- exec busybox-7b57f96db7-4qpgv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.50s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-240535 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-240535 -- exec busybox-7b57f96db7-4k9s7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-240535 -- exec busybox-7b57f96db7-4k9s7 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-240535 -- exec busybox-7b57f96db7-4qpgv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-240535 -- exec busybox-7b57f96db7-4qpgv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
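PingHostFrom2Pods extracts the host's address from inside each pod with nslookup host.minikube.internal piped through awk 'NR==5' and cut -d' ' -f3, i.e. it takes the fifth line of busybox nslookup output, keeps the third space-separated field, and then pings that address once. A hedged sketch of the same extraction in Go follows; it only reproduces the line/field arithmetic of the pipeline and assumes busybox's output layout.

	// resolve_host.go - sketch of the awk/cut extraction used by PingHostFrom2Pods.
	// It reproduces `awk 'NR==5' | cut -d' ' -f3` on nslookup output fed via stdin;
	// the "fifth line, third field" layout is a busybox nslookup assumption.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. nslookup host.minikube.internal | ./resolve_host
		line := 0
		for sc.Scan() {
			line++
			if line != 5 {
				continue
			}
			fields := strings.Split(sc.Text(), " ") // cut -d' ' splits on single spaces
			if len(fields) >= 3 {
				fmt.Println(fields[2]) // the address the test then checks with `ping -c 1`
			}
			return
		}
		fmt.Fprintln(os.Stderr, "nslookup output had fewer than 5 lines")
		os.Exit(1)
	}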

                                                
                                    
x
+
TestMultiNode/serial/AddNode (43.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-240535 -v=5 --alsologtostderr
E1206 09:16:25.408767    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-240535 -v=5 --alsologtostderr: (43.319779523s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.78s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-240535 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.47s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (6.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 cp testdata/cp-test.txt multinode-240535:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 cp multinode-240535:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2187259314/001/cp-test_multinode-240535.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 cp multinode-240535:/home/docker/cp-test.txt multinode-240535-m02:/home/docker/cp-test_multinode-240535_multinode-240535-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535-m02 "sudo cat /home/docker/cp-test_multinode-240535_multinode-240535-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 cp multinode-240535:/home/docker/cp-test.txt multinode-240535-m03:/home/docker/cp-test_multinode-240535_multinode-240535-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535-m03 "sudo cat /home/docker/cp-test_multinode-240535_multinode-240535-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 cp testdata/cp-test.txt multinode-240535-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 cp multinode-240535-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2187259314/001/cp-test_multinode-240535-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 cp multinode-240535-m02:/home/docker/cp-test.txt multinode-240535:/home/docker/cp-test_multinode-240535-m02_multinode-240535.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535 "sudo cat /home/docker/cp-test_multinode-240535-m02_multinode-240535.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 cp multinode-240535-m02:/home/docker/cp-test.txt multinode-240535-m03:/home/docker/cp-test_multinode-240535-m02_multinode-240535-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535-m03 "sudo cat /home/docker/cp-test_multinode-240535-m02_multinode-240535-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 cp testdata/cp-test.txt multinode-240535-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 cp multinode-240535-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2187259314/001/cp-test_multinode-240535-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 cp multinode-240535-m03:/home/docker/cp-test.txt multinode-240535:/home/docker/cp-test_multinode-240535-m03_multinode-240535.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535 "sudo cat /home/docker/cp-test_multinode-240535-m03_multinode-240535.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 cp multinode-240535-m03:/home/docker/cp-test.txt multinode-240535-m02:/home/docker/cp-test_multinode-240535-m03_multinode-240535-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 ssh -n multinode-240535-m02 "sudo cat /home/docker/cp-test_multinode-240535-m03_multinode-240535-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.04s)
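CopyFile exercises every copy direction with the same two-step pattern: minikube cp places the file at a node path, then minikube ssh -n <node> "sudo cat <path>" reads it back for comparison with the original. A hedged sketch of one such round trip follows; the profile, node, and paths mirror the log and would need adjusting for another cluster.

	// cp_roundtrip.go - sketch of one copy/verify round trip from CopyFile.
	// Profile, node, and paths mirror the log; adjust them for your own cluster.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		profile, node := "multinode-240535", "multinode-240535-m02"
		local, remote := "testdata/cp-test.txt", "/home/docker/cp-test.txt"

		want, err := os.ReadFile(local)
		if err != nil {
			fmt.Fprintln(os.Stderr, "read local file:", err)
			os.Exit(1)
		}
		if out, err := exec.Command("minikube", "-p", profile, "cp", local, node+":"+remote).CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "cp failed: %v\n%s", err, out)
			os.Exit(1)
		}
		got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
			"sudo cat "+remote).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "ssh cat failed:", err)
			os.Exit(1)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			fmt.Fprintln(os.Stderr, "copied content does not match the original")
			os.Exit(1)
		}
		fmt.Println("copy to", node, "verified")
	}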

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-240535 node stop m03: (1.728374031s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-240535 status: exit status 7 (340.923506ms)

                                                
                                                
-- stdout --
	multinode-240535
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-240535-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-240535-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-240535 status --alsologtostderr: exit status 7 (352.571135ms)

                                                
                                                
-- stdout --
	multinode-240535
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-240535-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-240535-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:17:16.135849   32112 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:17:16.136121   32112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:17:16.136132   32112 out.go:374] Setting ErrFile to fd 2...
	I1206 09:17:16.136136   32112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:17:16.136347   32112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 09:17:16.136532   32112 out.go:368] Setting JSON to false
	I1206 09:17:16.136554   32112 mustload.go:66] Loading cluster: multinode-240535
	I1206 09:17:16.136673   32112 notify.go:221] Checking for updates...
	I1206 09:17:16.136906   32112 config.go:182] Loaded profile config "multinode-240535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:17:16.136920   32112 status.go:174] checking status of multinode-240535 ...
	I1206 09:17:16.139658   32112 status.go:371] multinode-240535 host status = "Running" (err=<nil>)
	I1206 09:17:16.139721   32112 host.go:66] Checking if "multinode-240535" exists ...
	I1206 09:17:16.142766   32112 main.go:143] libmachine: domain multinode-240535 has defined MAC address 52:54:00:26:23:b9 in network mk-multinode-240535
	I1206 09:17:16.143284   32112 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:26:23:b9", ip: ""} in network mk-multinode-240535: {Iface:virbr1 ExpiryTime:2025-12-06 10:14:46 +0000 UTC Type:0 Mac:52:54:00:26:23:b9 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:multinode-240535 Clientid:01:52:54:00:26:23:b9}
	I1206 09:17:16.143324   32112 main.go:143] libmachine: domain multinode-240535 has defined IP address 192.168.39.188 and MAC address 52:54:00:26:23:b9 in network mk-multinode-240535
	I1206 09:17:16.143482   32112 host.go:66] Checking if "multinode-240535" exists ...
	I1206 09:17:16.143699   32112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:17:16.145750   32112 main.go:143] libmachine: domain multinode-240535 has defined MAC address 52:54:00:26:23:b9 in network mk-multinode-240535
	I1206 09:17:16.146104   32112 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:26:23:b9", ip: ""} in network mk-multinode-240535: {Iface:virbr1 ExpiryTime:2025-12-06 10:14:46 +0000 UTC Type:0 Mac:52:54:00:26:23:b9 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:multinode-240535 Clientid:01:52:54:00:26:23:b9}
	I1206 09:17:16.146137   32112 main.go:143] libmachine: domain multinode-240535 has defined IP address 192.168.39.188 and MAC address 52:54:00:26:23:b9 in network mk-multinode-240535
	I1206 09:17:16.146325   32112 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/multinode-240535/id_rsa Username:docker}
	I1206 09:17:16.236326   32112 ssh_runner.go:195] Run: systemctl --version
	I1206 09:17:16.243903   32112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:17:16.266243   32112 kubeconfig.go:125] found "multinode-240535" server: "https://192.168.39.188:8443"
	I1206 09:17:16.266284   32112 api_server.go:166] Checking apiserver status ...
	I1206 09:17:16.266329   32112 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:17:16.290381   32112 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1411/cgroup
	W1206 09:17:16.305198   32112 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1411/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:17:16.305262   32112 ssh_runner.go:195] Run: ls
	I1206 09:17:16.311486   32112 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1206 09:17:16.316454   32112 api_server.go:279] https://192.168.39.188:8443/healthz returned 200:
	ok
	I1206 09:17:16.316508   32112 status.go:463] multinode-240535 apiserver status = Running (err=<nil>)
	I1206 09:17:16.316522   32112 status.go:176] multinode-240535 status: &{Name:multinode-240535 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:17:16.316557   32112 status.go:174] checking status of multinode-240535-m02 ...
	I1206 09:17:16.318174   32112 status.go:371] multinode-240535-m02 host status = "Running" (err=<nil>)
	I1206 09:17:16.318192   32112 host.go:66] Checking if "multinode-240535-m02" exists ...
	I1206 09:17:16.320734   32112 main.go:143] libmachine: domain multinode-240535-m02 has defined MAC address 52:54:00:92:38:27 in network mk-multinode-240535
	I1206 09:17:16.321148   32112 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:92:38:27", ip: ""} in network mk-multinode-240535: {Iface:virbr1 ExpiryTime:2025-12-06 10:15:44 +0000 UTC Type:0 Mac:52:54:00:92:38:27 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:multinode-240535-m02 Clientid:01:52:54:00:92:38:27}
	I1206 09:17:16.321176   32112 main.go:143] libmachine: domain multinode-240535-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:92:38:27 in network mk-multinode-240535
	I1206 09:17:16.321304   32112 host.go:66] Checking if "multinode-240535-m02" exists ...
	I1206 09:17:16.321505   32112 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:17:16.323670   32112 main.go:143] libmachine: domain multinode-240535-m02 has defined MAC address 52:54:00:92:38:27 in network mk-multinode-240535
	I1206 09:17:16.324004   32112 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:92:38:27", ip: ""} in network mk-multinode-240535: {Iface:virbr1 ExpiryTime:2025-12-06 10:15:44 +0000 UTC Type:0 Mac:52:54:00:92:38:27 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:multinode-240535-m02 Clientid:01:52:54:00:92:38:27}
	I1206 09:17:16.324021   32112 main.go:143] libmachine: domain multinode-240535-m02 has defined IP address 192.168.39.189 and MAC address 52:54:00:92:38:27 in network mk-multinode-240535
	I1206 09:17:16.324146   32112 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22049-5603/.minikube/machines/multinode-240535-m02/id_rsa Username:docker}
	I1206 09:17:16.408081   32112 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:17:16.425958   32112 status.go:176] multinode-240535-m02 status: &{Name:multinode-240535-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:17:16.425993   32112 status.go:174] checking status of multinode-240535-m03 ...
	I1206 09:17:16.427658   32112 status.go:371] multinode-240535-m03 host status = "Stopped" (err=<nil>)
	I1206 09:17:16.427682   32112 status.go:384] host is not running, skipping remaining checks
	I1206 09:17:16.427689   32112 status.go:176] multinode-240535-m03 status: &{Name:multinode-240535-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
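
The StopNode status check above verifies the control plane by fetching the apiserver's /healthz endpoint over HTTPS. A minimal Go sketch of that probe, assuming the address from the log and skipping certificate verification for brevity (the real check authenticates with the cluster's certificates):

	// healthz_probe.go: minimal sketch of the apiserver health probe seen above.
	// Assumptions: the endpoint URL is copied from the log; TLS verification is
	// skipped only to keep the example short.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.188:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("/healthz returned %d: %s\n", resp.StatusCode, body)
	}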

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (40.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-240535 node start m03 -v=5 --alsologtostderr: (40.283369396s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.79s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (301.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-240535
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-240535
E1206 09:19:04.663195    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:56.581836    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-240535: (2m49.514686045s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-240535 --wait=true -v=5 --alsologtostderr
E1206 09:21:25.408853    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-240535 --wait=true -v=5 --alsologtostderr: (2m12.005274779s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-240535
--- PASS: TestMultiNode/serial/RestartKeepsNodes (301.64s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 node delete m03
E1206 09:22:59.647872    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-240535 node delete m03: (2.087810441s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.54s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (174.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 stop
E1206 09:24:04.656421    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:24:56.581836    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-240535 stop: (2m54.206995198s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-240535 status: exit status 7 (63.652457ms)

                                                
                                                
-- stdout --
	multinode-240535
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-240535-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-240535 status --alsologtostderr: exit status 7 (60.657956ms)

                                                
                                                
-- stdout --
	multinode-240535
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-240535-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:25:55.717441   34929 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:25:55.717537   34929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:25:55.717542   34929 out.go:374] Setting ErrFile to fd 2...
	I1206 09:25:55.717546   34929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:25:55.717760   34929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 09:25:55.717918   34929 out.go:368] Setting JSON to false
	I1206 09:25:55.717942   34929 mustload.go:66] Loading cluster: multinode-240535
	I1206 09:25:55.718047   34929 notify.go:221] Checking for updates...
	I1206 09:25:55.718271   34929 config.go:182] Loaded profile config "multinode-240535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:25:55.718283   34929 status.go:174] checking status of multinode-240535 ...
	I1206 09:25:55.720306   34929 status.go:371] multinode-240535 host status = "Stopped" (err=<nil>)
	I1206 09:25:55.720321   34929 status.go:384] host is not running, skipping remaining checks
	I1206 09:25:55.720326   34929 status.go:176] multinode-240535 status: &{Name:multinode-240535 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:25:55.720340   34929 status.go:174] checking status of multinode-240535-m02 ...
	I1206 09:25:55.721515   34929 status.go:371] multinode-240535-m02 host status = "Stopped" (err=<nil>)
	I1206 09:25:55.721531   34929 status.go:384] host is not running, skipping remaining checks
	I1206 09:25:55.721536   34929 status.go:176] multinode-240535-m02 status: &{Name:multinode-240535-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (174.33s)
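
The status calls above return exit code 7 once every host is stopped, and the test treats that code as expected rather than as a failure. A minimal Go sketch of driving "minikube status" and recognising that exit code, assuming the profile name from this run and a minikube binary on PATH:

	// status_exitcode.go: minimal sketch of interpreting the exit code of
	// "minikube status" as exercised above. In this run, exit code 7 corresponds
	// to stopped hosts; the profile name is an assumption about the local setup.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "-p", "multinode-240535", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
			fmt.Println("exit status 7: one or more hosts are stopped (expected after 'minikube stop')")
		} else if err != nil {
			fmt.Println("unexpected error:", err)
		}
	}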

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (89.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-240535 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1206 09:26:08.481363    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:26:25.409040    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:27:07.723068    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-240535 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m29.558906123s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-240535 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (89.99s)
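
The readiness check above passes a go-template to kubectl to print each node's Ready condition. A minimal Go sketch that evaluates the same template locally against invented sample data, so its behaviour is visible without a cluster:

	// ready_template.go: evaluates the go-template used with
	// "kubectl get nodes -o go-template" above. The node data below is invented
	// for illustration; kubectl would supply the real node list.
	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		nodes := map[string]any{
			"items": []map[string]any{
				{"status": map[string]any{"conditions": []map[string]any{{"type": "Ready", "status": "True"}}}},
				{"status": map[string]any{"conditions": []map[string]any{{"type": "Ready", "status": "True"}}}},
			},
		}
		// Prints one " True" line per node whose Ready condition is True.
		template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, nodes)
	}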

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (39.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-240535
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-240535-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-240535-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (71.664572ms)

                                                
                                                
-- stdout --
	* [multinode-240535-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-240535-m02' is duplicated with machine name 'multinode-240535-m02' in profile 'multinode-240535'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-240535-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-240535-m03 --driver=kvm2  --container-runtime=crio: (38.3515654s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-240535
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-240535: exit status 80 (196.322814ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-240535 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-240535-m03 already exists in multinode-240535-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-240535-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.50s)
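
The name-conflict test above expects exit code 14 when a new profile name collides with an existing profile or machine name. A toy Go sketch of that uniqueness rule, using the names from the log; the helper function is written for this example, not taken from minikube:

	// name_conflict.go: toy sketch of the profile-name uniqueness rule exercised
	// above. The names are copied from the log; validateProfileName is a
	// hypothetical helper for illustration only.
	package main

	import "fmt"

	func validateProfileName(name string, existing []string) error {
		for _, e := range existing {
			if e == name {
				return fmt.Errorf("profile name %q is duplicated with existing name %q", name, e)
			}
		}
		return nil
	}

	func main() {
		existing := []string{"multinode-240535", "multinode-240535-m02"}
		if err := validateProfileName("multinode-240535-m02", existing); err != nil {
			fmt.Println("Exiting due to MK_USAGE:", err)
		}
	}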

                                                
                                    
x
+
TestScheduledStopUnix (109.51s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-065951 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-065951 --memory=3072 --driver=kvm2  --container-runtime=crio: (37.963457436s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-065951 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 09:31:24.607424   37278 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:31:24.607540   37278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:31:24.607548   37278 out.go:374] Setting ErrFile to fd 2...
	I1206 09:31:24.607555   37278 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:31:24.607762   37278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 09:31:24.607983   37278 out.go:368] Setting JSON to false
	I1206 09:31:24.608089   37278 mustload.go:66] Loading cluster: scheduled-stop-065951
	I1206 09:31:24.608392   37278 config.go:182] Loaded profile config "scheduled-stop-065951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:31:24.608489   37278 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/config.json ...
	I1206 09:31:24.608668   37278 mustload.go:66] Loading cluster: scheduled-stop-065951
	I1206 09:31:24.608796   37278 config.go:182] Loaded profile config "scheduled-stop-065951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-065951 -n scheduled-stop-065951
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-065951 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 09:31:24.883035   37322 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:31:24.883126   37322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:31:24.883138   37322 out.go:374] Setting ErrFile to fd 2...
	I1206 09:31:24.883145   37322 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:31:24.883322   37322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 09:31:24.883540   37322 out.go:368] Setting JSON to false
	I1206 09:31:24.883709   37322 daemonize_unix.go:73] killing process 37312 as it is an old scheduled stop
	I1206 09:31:24.883806   37322 mustload.go:66] Loading cluster: scheduled-stop-065951
	I1206 09:31:24.884277   37322 config.go:182] Loaded profile config "scheduled-stop-065951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:31:24.884366   37322 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/config.json ...
	I1206 09:31:24.884606   37322 mustload.go:66] Loading cluster: scheduled-stop-065951
	I1206 09:31:24.884751   37322 config.go:182] Loaded profile config "scheduled-stop-065951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1206 09:31:24.890382    9552 retry.go:31] will retry after 100.193µs: open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid: no such file or directory
I1206 09:31:24.891551    9552 retry.go:31] will retry after 192.731µs: open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid: no such file or directory
I1206 09:31:24.892648    9552 retry.go:31] will retry after 258.212µs: open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid: no such file or directory
I1206 09:31:24.893765    9552 retry.go:31] will retry after 448.082µs: open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid: no such file or directory
I1206 09:31:24.894889    9552 retry.go:31] will retry after 550.9µs: open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid: no such file or directory
I1206 09:31:24.896007    9552 retry.go:31] will retry after 1.025995ms: open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid: no such file or directory
I1206 09:31:24.897137    9552 retry.go:31] will retry after 1.535394ms: open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid: no such file or directory
I1206 09:31:24.899315    9552 retry.go:31] will retry after 1.67193ms: open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid: no such file or directory
I1206 09:31:24.901513    9552 retry.go:31] will retry after 1.887404ms: open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid: no such file or directory
I1206 09:31:24.903699    9552 retry.go:31] will retry after 4.090943ms: open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid: no such file or directory
I1206 09:31:24.909047    9552 retry.go:31] will retry after 3.159859ms: open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid: no such file or directory
I1206 09:31:24.913263    9552 retry.go:31] will retry after 9.189991ms: open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid: no such file or directory
I1206 09:31:24.923480    9552 retry.go:31] will retry after 7.602573ms: open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid: no such file or directory
I1206 09:31:24.931778    9552 retry.go:31] will retry after 18.046527ms: open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid: no such file or directory
I1206 09:31:24.949975    9552 retry.go:31] will retry after 18.258565ms: open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid: no such file or directory
I1206 09:31:24.969223    9552 retry.go:31] will retry after 35.96649ms: open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-065951 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1206 09:31:25.409053    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-065951 -n scheduled-stop-065951
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-065951
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-065951 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 09:31:50.563307   37487 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:31:50.563650   37487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:31:50.563665   37487 out.go:374] Setting ErrFile to fd 2...
	I1206 09:31:50.563674   37487 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:31:50.563959   37487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 09:31:50.564295   37487 out.go:368] Setting JSON to false
	I1206 09:31:50.564416   37487 mustload.go:66] Loading cluster: scheduled-stop-065951
	I1206 09:31:50.564952   37487 config.go:182] Loaded profile config "scheduled-stop-065951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:31:50.565051   37487 profile.go:143] Saving config to /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/config.json ...
	I1206 09:31:50.565324   37487 mustload.go:66] Loading cluster: scheduled-stop-065951
	I1206 09:31:50.565494   37487 config.go:182] Loaded profile config "scheduled-stop-065951": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-065951
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-065951: exit status 7 (57.096846ms)

                                                
                                                
-- stdout --
	scheduled-stop-065951
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-065951 -n scheduled-stop-065951
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-065951 -n scheduled-stop-065951: exit status 7 (55.538031ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-065951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-065951
--- PASS: TestScheduledStopUnix (109.51s)
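
The retry lines above show the test polling the scheduled-stop pid file with a growing delay between attempts. A minimal Go sketch of that polling loop, assuming the path from the log and invented backoff constants:

	// pid_retry.go: minimal sketch of the retry loop visible above: repeatedly
	// try to open the scheduled-stop pid file, backing off between attempts,
	// until it can be read or a deadline passes. The path comes from the log;
	// the initial delay and deadline are assumptions.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		path := "/home/jenkins/minikube-integration/22049-5603/.minikube/profiles/scheduled-stop-065951/pid"
		delay := 100 * time.Microsecond
		deadline := time.Now().Add(5 * time.Second)
		for time.Now().Before(deadline) {
			f, err := os.Open(path)
			if err != nil {
				fmt.Printf("will retry after %v: %v\n", delay, err)
				time.Sleep(delay)
				delay *= 2
				continue
			}
			f.Close()
			fmt.Println("pid file present; a scheduled stop is pending")
			return
		}
		fmt.Println("gave up waiting for the pid file")
	}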

                                                
                                    
x
+
TestRunningBinaryUpgrade (148.2s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1649368254 start -p running-upgrade-044478 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1649368254 start -p running-upgrade-044478 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m36.672232484s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-044478 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-044478 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.858470458s)
helpers_test.go:175: Cleaning up "running-upgrade-044478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-044478
--- PASS: TestRunningBinaryUpgrade (148.20s)

                                                
                                    
x
+
TestKubernetesUpgrade (195.96s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-460997 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-460997 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.252319173s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-460997
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-460997: (1.916312099s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-460997 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-460997 status --format={{.Host}}: exit status 7 (63.896333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-460997 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1206 09:36:25.409531    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-460997 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m30.580137624s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-460997 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-460997 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-460997 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (75.287627ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-460997] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-460997
	    minikube start -p kubernetes-upgrade-460997 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4609972 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-460997 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-460997 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-460997 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.055397979s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-460997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-460997
--- PASS: TestKubernetesUpgrade (195.96s)
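
The downgrade attempt above is rejected with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED) because the requested version is older than the cluster's current one. A toy Go sketch of that version comparison; the parsing helper is a simplification written for this example, not minikube's own logic:

	// downgrade_check.go: toy sketch of refusing a Kubernetes downgrade, using
	// the versions from the log. parse and olderThan are simplified helpers
	// written for this example (pre-release suffixes are ignored).
	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	func parse(v string) [3]int {
		v = strings.TrimPrefix(v, "v")
		if i := strings.IndexAny(v, "-+"); i >= 0 {
			v = v[:i]
		}
		var out [3]int
		for i, p := range strings.SplitN(v, ".", 3) {
			out[i], _ = strconv.Atoi(p)
		}
		return out
	}

	func olderThan(a, b [3]int) bool {
		for i := 0; i < 3; i++ {
			if a[i] != b[i] {
				return a[i] < b[i]
			}
		}
		return false
	}

	func main() {
		existing, requested := "v1.35.0-beta.0", "v1.28.0"
		if olderThan(parse(requested), parse(existing)) {
			fmt.Printf("refusing to downgrade existing Kubernetes %s cluster to %s\n", existing, requested)
		}
	}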

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-030154 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-030154 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (89.660256ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-030154] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
x
+
TestPause/serial/Start (102.69s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-272844 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-272844 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m42.685599538s)
--- PASS: TestPause/serial/Start (102.69s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (82.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-030154 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-030154 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m22.129788936s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-030154 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (82.40s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (29.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-030154 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1206 09:34:04.656134    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-030154 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (28.327366328s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-030154 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-030154 status -o json: exit status 2 (206.189565ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-030154","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-030154
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.41s)
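
The JSON status shown above reports the host running while the kubelet and apiserver are stopped. A minimal Go sketch that decodes that output; the struct mirrors the logged fields and is defined here for convenience, not taken from minikube:

	// status_json.go: decodes the JSON emitted by "minikube status -o json" as
	// shown above. The raw string is copied from the log; profileStatus is a
	// convenience type for this example.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := `{"Name":"NoKubernetes-030154","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var st profileStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
	}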

                                                
                                    
x
+
TestNoKubernetes/serial/Start (31.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-030154 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-030154 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (31.481831104s)
--- PASS: TestNoKubernetes/serial/Start (31.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-920584 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-920584 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (117.726556ms)

                                                
                                                
-- stdout --
	* [false-920584] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22049
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:34:40.705654   40121 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:34:40.705890   40121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:34:40.705898   40121 out.go:374] Setting ErrFile to fd 2...
	I1206 09:34:40.705902   40121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:34:40.706133   40121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22049-5603/.minikube/bin
	I1206 09:34:40.706639   40121 out.go:368] Setting JSON to false
	I1206 09:34:40.707457   40121 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4623,"bootTime":1765009058,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:34:40.707520   40121 start.go:143] virtualization: kvm guest
	I1206 09:34:40.709495   40121 out.go:179] * [false-920584] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:34:40.710649   40121 notify.go:221] Checking for updates...
	I1206 09:34:40.710674   40121 out.go:179]   - MINIKUBE_LOCATION=22049
	I1206 09:34:40.711795   40121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:34:40.712899   40121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22049-5603/kubeconfig
	I1206 09:34:40.713938   40121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22049-5603/.minikube
	I1206 09:34:40.714963   40121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:34:40.716176   40121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:34:40.717778   40121 config.go:182] Loaded profile config "NoKubernetes-030154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1206 09:34:40.717899   40121 config.go:182] Loaded profile config "pause-272844": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:34:40.717979   40121 config.go:182] Loaded profile config "running-upgrade-044478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1206 09:34:40.718077   40121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:34:40.754038   40121 out.go:179] * Using the kvm2 driver based on user configuration
	I1206 09:34:40.755026   40121 start.go:309] selected driver: kvm2
	I1206 09:34:40.755045   40121 start.go:927] validating driver "kvm2" against <nil>
	I1206 09:34:40.755058   40121 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:34:40.756791   40121 out.go:203] 
	W1206 09:34:40.757788   40121 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1206 09:34:40.758710   40121 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-920584 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-920584

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-920584

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-920584

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-920584

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-920584

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-920584

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-920584

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-920584

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-920584

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-920584

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-920584

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-920584" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-920584" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:33:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.157:8443
  name: pause-272844
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:34:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.82:8443
  name: running-upgrade-044478
contexts:
- context:
    cluster: pause-272844
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:33:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-272844
  name: pause-272844
- context:
    cluster: running-upgrade-044478
    user: running-upgrade-044478
  name: running-upgrade-044478
current-context: ""
kind: Config
users:
- name: pause-272844
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/pause-272844/client.crt
    client-key: /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/pause-272844/client.key
- name: running-upgrade-044478
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/running-upgrade-044478/client.crt
    client-key: /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/running-upgrade-044478/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-920584

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

>>> host: docker system info:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

>>> host: cri-docker daemon status:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

>>> host: cri-docker daemon config:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

>>> host: cri-dockerd version:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

>>> host: containerd daemon status:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

>>> host: containerd daemon config:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

>>> host: /etc/containerd/config.toml:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

>>> host: containerd config dump:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

>>> host: crio daemon status:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

>>> host: crio daemon config:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

>>> host: /etc/crio:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

>>> host: crio config:
* Profile "false-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-920584"

----------------------- debugLogs end: false-920584 [took: 3.103857515s] --------------------------------
helpers_test.go:175: Cleaning up "false-920584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-920584
--- PASS: TestNetworkPlugins/group/false (3.37s)

TestISOImage/Setup (30.77s)

=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-688206 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1206 09:34:56.581708    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-688206 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.772163462s)
--- PASS: TestISOImage/Setup (30.77s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22049-5603/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-030154 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-030154 "sudo systemctl is-active --quiet service kubelet": exit status 1 (173.708272ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

TestNoKubernetes/serial/ProfileList (15.38s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (14.847530432s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.38s)

TestNoKubernetes/serial/Stop (1.37s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-030154
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-030154: (1.367247979s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

TestNoKubernetes/serial/StartNoArgs (43.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-030154 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-030154 --driver=kvm2  --container-runtime=crio: (43.342810015s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (43.34s)

TestISOImage/Binaries/crictl (0.18s)

=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.18s)

TestISOImage/Binaries/curl (0.18s)

=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.18s)

TestISOImage/Binaries/docker (0.18s)

=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.18s)

TestISOImage/Binaries/git (0.17s)

=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.17s)

TestISOImage/Binaries/iptables (0.16s)

=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.16s)

TestISOImage/Binaries/podman (0.17s)

=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.17s)

TestISOImage/Binaries/rsync (0.16s)

=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.16s)

TestISOImage/Binaries/socat (0.17s)

=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "which socat"
E1206 09:46:25.408612    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/socat (0.17s)

TestISOImage/Binaries/wget (0.17s)

=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.17s)

TestISOImage/Binaries/VBoxControl (0.17s)

=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

TestISOImage/Binaries/VBoxService (0.17s)

=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.17s)

TestStoppedBinaryUpgrade/Setup (3.16s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.16s)

TestStoppedBinaryUpgrade/Upgrade (149.57s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.969069311 start -p stopped-upgrade-295047 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.969069311 start -p stopped-upgrade-295047 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m15.028532815s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.969069311 -p stopped-upgrade-295047 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.969069311 -p stopped-upgrade-295047 stop: (2.036866888s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-295047 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-295047 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m12.500773073s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (149.57s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-030154 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-030154 "sudo systemctl is-active --quiet service kubelet": exit status 1 (174.851436ms)

** stderr ** 
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-295047
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-295047: (1.049356486s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

TestStartStop/group/old-k8s-version/serial/FirstStart (104.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-433324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-433324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m44.596167864s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (104.60s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-562926 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-562926 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m32.400571428s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.40s)

TestStartStop/group/embed-certs/serial/FirstStart (109.56s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-967242 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1206 09:39:04.656389    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-967242 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m49.557232821s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (109.56s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-433324 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [55d5e90f-ec53-4c20-b4ec-2bcbe1853198] Pending
E1206 09:39:39.649175    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [55d5e90f-ec53-4c20-b4ec-2bcbe1853198] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [55d5e90f-ec53-4c20-b4ec-2bcbe1853198] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004816919s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-433324 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-433324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-433324 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/old-k8s-version/serial/Stop (86.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-433324 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-433324 --alsologtostderr -v=3: (1m26.382744803s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (86.38s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-562926 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d812d007-6230-46ad-85f1-72ae0af9d3f6] Pending
helpers_test.go:352: "busybox" [d812d007-6230-46ad-85f1-72ae0af9d3f6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1206 09:39:56.581524    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [d812d007-6230-46ad-85f1-72ae0af9d3f6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004737553s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-562926 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-562926 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-562926 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (87.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-562926 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-562926 --alsologtostderr -v=3: (1m27.968539346s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (87.97s)

TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-967242 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6a48b356-8bca-4ac0-aa5d-e464e815fbb0] Pending
helpers_test.go:352: "busybox" [6a48b356-8bca-4ac0-aa5d-e464e815fbb0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6a48b356-8bca-4ac0-aa5d-e464e815fbb0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004419932s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-967242 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-967242 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-967242 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/embed-certs/serial/Stop (90.07s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-967242 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-967242 --alsologtostderr -v=3: (1m30.067632834s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (90.07s)

TestStartStop/group/no-preload/serial/FirstStart (94.89s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-110097 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-110097 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m34.890509432s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (94.89s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-433324 -n old-k8s-version-433324
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-433324 -n old-k8s-version-433324: exit status 7 (59.986147ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-433324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/old-k8s-version/serial/SecondStart (46.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-433324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1206 09:41:25.409066    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-433324 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (46.33700643s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-433324 -n old-k8s-version-433324
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.63s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-562926 -n default-k8s-diff-port-562926
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-562926 -n default-k8s-diff-port-562926: exit status 7 (58.587135ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-562926 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-562926 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-562926 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (45.677772763s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-562926 -n default-k8s-diff-port-562926
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.97s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-967242 -n embed-certs-967242
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-967242 -n embed-certs-967242: exit status 7 (71.24381ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-967242 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/embed-certs/serial/SecondStart (44.83s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-967242 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-967242 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (44.516059555s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-967242 -n embed-certs-967242
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.83s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-h9p5c" [d6562a21-8c73-4ef2-91a8-b615550275de] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-h9p5c" [d6562a21-8c73-4ef2-91a8-b615550275de] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.005520686s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/no-preload/serial/DeployApp (12.53s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-110097 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [323bf77c-3977-414a-b77e-51fdc8559882] Pending
helpers_test.go:352: "busybox" [323bf77c-3977-414a-b77e-51fdc8559882] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [323bf77c-3977-414a-b77e-51fdc8559882] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.004883725s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-110097 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.53s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-h9p5c" [d6562a21-8c73-4ef2-91a8-b615550275de] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014064257s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-433324 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (18.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jlznq" [30acedff-8e17-44ae-89b5-cee04198f07f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jlznq" [30acedff-8e17-44ae-89b5-cee04198f07f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.004183693s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (18.01s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-433324 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-433324 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-433324 -n old-k8s-version-433324
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-433324 -n old-k8s-version-433324: exit status 2 (230.029104ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-433324 -n old-k8s-version-433324
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-433324 -n old-k8s-version-433324: exit status 2 (230.067324ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-433324 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-433324 -n old-k8s-version-433324
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-433324 -n old-k8s-version-433324
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.94s)

TestStartStop/group/newest-cni/serial/FirstStart (43.64s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-481573 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-481573 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (43.637185885s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.64s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-110097 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-110097 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.170254095s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-110097 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/no-preload/serial/Stop (86.55s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-110097 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-110097 --alsologtostderr -v=3: (1m26.544893282s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (86.55s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jlznq" [30acedff-8e17-44ae-89b5-cee04198f07f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005012914s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-562926 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-w9vl2" [23ffb678-967d-4de5-be66-93cdb564d8f4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-w9vl2" [23ffb678-967d-4de5-be66-93cdb564d8f4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.003465344s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-562926 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-562926 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-562926 -n default-k8s-diff-port-562926
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-562926 -n default-k8s-diff-port-562926: exit status 2 (254.52698ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-562926 -n default-k8s-diff-port-562926
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-562926 -n default-k8s-diff-port-562926: exit status 2 (227.077688ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-562926 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-562926 -n default-k8s-diff-port-562926
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-562926 -n default-k8s-diff-port-562926
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.82s)

TestNetworkPlugins/group/auto/Start (81.69s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-920584 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E1206 09:42:48.483663    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/addons-618522/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-920584 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m21.688480112s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.69s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-w9vl2" [23ffb678-967d-4de5-be66-93cdb564d8f4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004755884s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-967242 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-967242 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-967242 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-967242 --alsologtostderr -v=1: (1.105625782s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-967242 -n embed-certs-967242
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-967242 -n embed-certs-967242: exit status 2 (226.065664ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-967242 -n embed-certs-967242
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-967242 -n embed-certs-967242: exit status 2 (226.647558ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-967242 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-967242 -n embed-certs-967242
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-967242 -n embed-certs-967242
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (67.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-920584 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-920584 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m7.352650122s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-481573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-481573 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.954268201s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.95s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-481573 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-481573 --alsologtostderr -v=3: (7.234365575s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-481573 -n newest-cni-481573
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-481573 -n newest-cni-481573: exit status 7 (58.848312ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-481573 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (44.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-481573 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1206 09:43:47.725252    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-481573 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (44.409123843s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-481573 -n newest-cni-481573
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (44.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-110097 -n no-preload-110097
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-110097 -n no-preload-110097: exit status 7 (72.782665ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-110097 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (61.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-110097 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-110097 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m0.790149758s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-110097 -n no-preload-110097
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (61.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-481573 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-481573 --alsologtostderr -v=1
E1206 09:44:04.656162    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-118298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-481573 --alsologtostderr -v=1: (1.516006719s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-481573 -n newest-cni-481573
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-481573 -n newest-cni-481573: exit status 2 (342.287568ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-481573 -n newest-cni-481573
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-481573 -n newest-cni-481573: exit status 2 (312.873964ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-481573 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-481573 --alsologtostderr -v=1: (1.050827851s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-481573 -n newest-cni-481573
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-481573 -n newest-cni-481573
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.99s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-gthd5" [aed37db8-2f3e-48d7-91c1-3d580193969e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00446326s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
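
The readiness gate above (10m0s for pods matching app=kindnet in kube-system) can be approximated outside the harness with kubectl wait; a sketch, assuming the kindnet-920584 context from this run:

  # block until the kindnet DaemonSet pod in kube-system reports Ready, up to 10 minutes
  kubectl --context kindnet-920584 --namespace=kube-system wait \
    --for=condition=ready pod --selector=app=kindnet --timeout=10m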

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-920584 "pgrep -a kubelet"
I1206 09:44:08.428011    9552 config.go:182] Loaded profile config "auto-920584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-920584 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2z7m2" [ac2b4f6a-a84f-41dd-b9b0-0c51761fbddc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2z7m2" [ac2b4f6a-a84f-41dd-b9b0-0c51761fbddc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005540038s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (92.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-920584 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-920584 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m32.723875859s)
--- PASS: TestNetworkPlugins/group/calico/Start (92.72s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-920584 "pgrep -a kubelet"
I1206 09:44:13.565001    9552 config.go:182] Loaded profile config "kindnet-920584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-920584 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pplb5" [a4bdd67e-f9e5-41b9-bafb-8d37d82a7096] Pending
helpers_test.go:352: "netcat-cd4db9dbf-pplb5" [a4bdd67e-f9e5-41b9-bafb-8d37d82a7096] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pplb5" [a4bdd67e-f9e5-41b9-bafb-8d37d82a7096] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005726964s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.22s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-920584 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-920584 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-920584 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
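
Taken together, the three checks above form a small connectivity probe run from inside the netcat deployment; a sketch, assuming the auto-920584 context is still available (the last command goes through the pod's own Service, so it only passes when hairpin traffic is handled):

  ctx=auto-920584
  # cluster DNS: resolve the kubernetes.default Service name
  kubectl --context "$ctx" exec deployment/netcat -- nslookup kubernetes.default
  # localhost: the pod can reach its own listening port
  kubectl --context "$ctx" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # hairpin: the pod can reach itself via the netcat Service name
  kubectl --context "$ctx" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"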

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-920584 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-920584 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-920584 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (81.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-920584 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1206 09:44:39.045750    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/old-k8s-version-433324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:39.052193    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/old-k8s-version-433324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:39.064092    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/old-k8s-version-433324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:39.085533    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/old-k8s-version-433324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:39.128404    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/old-k8s-version-433324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:39.210334    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/old-k8s-version-433324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:39.372376    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/old-k8s-version-433324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:39.693900    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/old-k8s-version-433324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:40.336134    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/old-k8s-version-433324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-920584 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m21.925882322s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.93s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (79.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-920584 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1206 09:44:44.180639    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/old-k8s-version-433324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:49.302476    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/old-k8s-version-433324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:52.792653    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/default-k8s-diff-port-562926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:52.799152    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/default-k8s-diff-port-562926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:52.811058    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/default-k8s-diff-port-562926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:52.832956    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/default-k8s-diff-port-562926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:52.874435    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/default-k8s-diff-port-562926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:52.955915    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/default-k8s-diff-port-562926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:53.117569    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/default-k8s-diff-port-562926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:53.439151    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/default-k8s-diff-port-562926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:54.080763    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/default-k8s-diff-port-562926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:55.362154    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/default-k8s-diff-port-562926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-920584 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m19.307042149s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-bhjg7" [09d338d0-f019-49d7-9c34-f501a7cc50a1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1206 09:44:56.582486    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/functional-171063/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:57.924080    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/default-k8s-diff-port-562926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:59.543783    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/old-k8s-version-433324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:45:03.045768    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/default-k8s-diff-port-562926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-bhjg7" [09d338d0-f019-49d7-9c34-f501a7cc50a1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005386805s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-bhjg7" [09d338d0-f019-49d7-9c34-f501a7cc50a1] Running
E1206 09:45:13.287667    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/default-k8s-diff-port-562926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006390778s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-110097 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-110097 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-110097 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-110097 --alsologtostderr -v=1: (1.155799833s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-110097 -n no-preload-110097
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-110097 -n no-preload-110097: exit status 2 (247.108373ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-110097 -n no-preload-110097
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-110097 -n no-preload-110097: exit status 2 (260.217872ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-110097 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-110097 --alsologtostderr -v=1: (1.001146896s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-110097 -n no-preload-110097
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-110097 -n no-preload-110097
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.47s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (76.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-920584 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1206 09:45:33.770081    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/default-k8s-diff-port-562926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-920584 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m16.260312208s)
--- PASS: TestNetworkPlugins/group/flannel/Start (76.26s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-mvmc4" [37612a90-93ec-43dd-b679-02615d1f49db] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003903754s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-920584 "pgrep -a kubelet"
I1206 09:45:48.302966    9552 config.go:182] Loaded profile config "calico-920584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-920584 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mg2w2" [3db0d2cb-26bb-4bd6-b607-65cf89ddf8b2] Pending
helpers_test.go:352: "netcat-cd4db9dbf-mg2w2" [3db0d2cb-26bb-4bd6-b607-65cf89ddf8b2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.006340937s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.48s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-920584 "pgrep -a kubelet"
I1206 09:45:56.661247    9552 config.go:182] Loaded profile config "custom-flannel-920584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-920584 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dbs9q" [5d3a9291-2185-4b3e-b8e1-8bf96ce80d1e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1206 09:46:00.986639    9552 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/old-k8s-version-433324/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-dbs9q" [5d3a9291-2185-4b3e-b8e1-8bf96ce80d1e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004769679s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-920584 "pgrep -a kubelet"
I1206 09:46:01.436876    9552 config.go:182] Loaded profile config "enable-default-cni-920584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-920584 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-s9mzr" [38e723fd-c2a7-420d-bc25-c833cb1e1608] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-s9mzr" [38e723fd-c2a7-420d-bc25-c833cb1e1608] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.006533923s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-920584 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-920584 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-920584 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-920584 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-920584 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-920584 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-920584 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-920584 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-920584 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (83.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-920584 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-920584 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m23.770611653s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.77s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.27s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.27s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.16s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.16s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.16s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.16s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.16s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.16s)
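
The seven PersistentMounts sub-tests above run the same check against different paths; a compact way to repeat them, assuming the guest-688206 profile is still running (the path list mirrors the sub-test names):

  for d in /data /var/lib/docker /var/lib/cni /var/lib/kubelet \
           /var/lib/minikube /var/lib/toolbox /var/lib/boot2docker; do
    # each persistent mount must show up as an ext4 filesystem inside the guest
    out/minikube-linux-amd64 -p guest-688206 ssh "df -t ext4 $d | grep $d"
  done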

                                                
                                    
TestISOImage/VersionJSON (0.16s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1764843329-22032
iso_test.go:118:   kicbase_version: v0.0.48-1764169655-21974
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e
--- PASS: TestISOImage/VersionJSON (0.16s)
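
The fields above are parsed from /version.json inside the ISO; to pull a single field on the host, a sketch assuming jq is installed there (jq is not part of the test itself):

  # print the ISO build the guest was booted from
  out/minikube-linux-amd64 -p guest-688206 ssh "cat /version.json" | jq -r .iso_version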

                                                
                                    
TestISOImage/eBPFSupport (0.16s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-688206 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.16s)
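
The eBPF probe above only checks for /sys/kernel/btf/vmlinux, i.e. whether the guest kernel ships BTF type information (which CO-RE-style eBPF tooling relies on); it can be rerun by hand against the same profile:

  out/minikube-linux-amd64 -p guest-688206 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"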

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-vwzvl" [30e3263f-e3a7-4970-b3f1-a3fb180ad669] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004679829s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-920584 "pgrep -a kubelet"
I1206 09:46:43.289964    9552 config.go:182] Loaded profile config "flannel-920584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-920584 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5gm57" [5d6e55c0-7df8-41d7-94b5-52e9cdc9587d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5gm57" [5d6e55c0-7df8-41d7-94b5-52e9cdc9587d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005318986s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.23s)
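
The NetCatPod step above waits up to 15m for pods labelled app=netcat in the default namespace to reach Running. A minimal client-go sketch of the same kind of poll, assuming the current kubeconfig context already points at the cluster under test; the helper names, phase check, and 2s interval are illustrative, not the harness's own code.

// Illustrative only: poll for pods matching app=netcat until they are Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allRunning reports whether a non-empty pod list is entirely in the Running phase.
func allRunning(pods []corev1.Pod) bool {
	if len(pods) == 0 {
		return false
	}
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=netcat"})
		if err == nil && allRunning(pods.Items) {
			fmt.Println("app=netcat healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}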

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-920584 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-920584 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-920584 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)
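
The Localhost and HairPin checks above boil down to a timed TCP connect from inside the netcat pod to port 8080: first on localhost, then back through the pod's own service name. A Go sketch of an equivalent probe; the host names come from the commands above, the code itself is illustrative.

// Illustrative only: a timed TCP connect analogous to `nc -w 5 -z <host> 8080`.
package main

import (
	"fmt"
	"net"
	"time"
)

func probe(host string) {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "8080"), 5*time.Second)
	if err != nil {
		fmt.Printf("%s:8080 unreachable: %v\n", host, err)
		return
	}
	conn.Close()
	fmt.Printf("%s:8080 reachable\n", host)
}

func main() {
	probe("localhost") // Localhost subtest
	probe("netcat")    // HairPin subtest: the pod reaches itself via its own service
}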

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-920584 "pgrep -a kubelet"
I1206 09:47:42.087654    9552 config.go:182] Loaded profile config "bridge-920584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-920584 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hjxc5" [b25938df-502c-4fdc-b8de-f75670e8b70b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hjxc5" [b25938df-502c-4fdc-b8de-f75670e8b70b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004139517s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-920584 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)
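
The DNS subtest above simply resolves the short service name kubernetes.default from inside the pod. A Go analogue of that lookup, assuming it runs in a pod whose resolv.conf search path covers the default namespace (illustrative only):

// Illustrative only: resolve the cluster API service by its short in-cluster name.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("kubernetes.default")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("kubernetes.default resolves to:", addrs)
}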

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-920584 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-920584 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (52/431)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.31
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
362 TestStartStop/group/disable-driver-mounts 0.15
372 TestNetworkPlugins/group/kubenet 3.49
380 TestNetworkPlugins/group/cilium 4.76
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-618522 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-206851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-206851
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-920584 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-920584

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-920584

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-920584

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-920584

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-920584

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-920584

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-920584

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-920584

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-920584

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-920584

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-920584

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-920584" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-920584" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:33:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.157:8443
  name: pause-272844
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:34:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.82:8443
  name: running-upgrade-044478
contexts:
- context:
    cluster: pause-272844
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:33:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-272844
  name: pause-272844
- context:
    cluster: running-upgrade-044478
    user: running-upgrade-044478
  name: running-upgrade-044478
current-context: ""
kind: Config
users:
- name: pause-272844
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/pause-272844/client.crt
    client-key: /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/pause-272844/client.key
- name: running-upgrade-044478
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/running-upgrade-044478/client.crt
    client-key: /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/running-upgrade-044478/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-920584

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-920584"

                                                
                                                
----------------------- debugLogs end: kubenet-920584 [took: 3.330463871s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-920584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-920584
--- SKIP: TestNetworkPlugins/group/kubenet (3.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-920584 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-920584

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-920584

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-920584

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-920584

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-920584

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-920584

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-920584

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-920584

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-920584

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-920584

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-920584

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-920584" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-920584

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-920584

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-920584

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-920584

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-920584" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-920584" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:33:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.157:8443
  name: pause-272844
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22049-5603/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:34:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.82:8443
  name: running-upgrade-044478
contexts:
- context:
    cluster: pause-272844
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 09:33:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-272844
  name: pause-272844
- context:
    cluster: running-upgrade-044478
    user: running-upgrade-044478
  name: running-upgrade-044478
current-context: ""
kind: Config
users:
- name: pause-272844
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/pause-272844/client.crt
    client-key: /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/pause-272844/client.key
- name: running-upgrade-044478
  user:
    client-certificate: /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/running-upgrade-044478/client.crt
    client-key: /home/jenkins/minikube-integration/22049-5603/.minikube/profiles/running-upgrade-044478/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-920584

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-920584" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-920584"

                                                
                                                
----------------------- debugLogs end: cilium-920584 [took: 4.567857766s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-920584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-920584
--- SKIP: TestNetworkPlugins/group/cilium (4.76s)

                                                
                                    